Getting to Know PADT: Cube Simulation Computers

This post is the seventh installment in our review of all the different products and services PADT offers our customers. As we add more, they will be available here.  As always, if you have any questions don’t hesitate to reach out to info@padtinc.com or give us a call at 1-800-293-PADT.

“It’s done running already? What machine did you run that on? Your desktop? How do I buy one?”  That series of questions from one of PADT’s ANSYS customers is how CUBE Computers became one of our product offerings.  We offer a complete line of six standard systems to meet the needs of the most demanding, and most cost-conscious, users.

The problem is that in the world of advanced numerical simulation, most off-the-shelf computers just don’t perform like they should. They are expensive, weighed down with unnecessary accessories, and slowed down by poor configuration.  Because PADT has been building our own computers for over twenty years for the sole purpose of running simulation models, we know how to configure boxes that hit the sweet spot everyone is looking for.  Our engineers and IT staff work with customers to find the right standard system, or to customize a unique system that is ideal.

We break our standard models into three families: Workstations, Servers, and Cluster Appliances.  Although each type can be heavily customized, we have pre-configured the following systems to make it easy for users to quickly get what they need:

Each machine comes with maintenance and support that is also tuned to the customer’s needs, from a basic parts-only warranty to same-day on-site support. You can also have us install the system and your simulation software.  Whatever you need, we can deliver.

Over one hundred customers, many of whom have purchased multiple systems from us over the years, have worked with PADT’s team to obtain an optimized computer system that maximizes the return on their simulation investment. Reach out to our CUBE Computer System team and let them help you “Discover What You Need.”

 

Nerdtoberfest is coming up soon!

Nerdtoberfest, PADT’s annual fall open house, is coming up soon!

Join us: Thursday, October 26th, 2017, from 5:00 pm to 8:00 pm MST at 7755 S. Research Drive, Tempe, AZ 85281

This year our fall open house will offer attendees a glimpse at some of our core offerings, introductions to a few new additions, and free food and drinks! Come experience this innovative technology first-hand, including:

  • CUBE High Performance Computing (HPC) Systems
  • 3D Scanning
  • FDM Services
  • Stratasys 3D Printers
  • Carbon 3D Printing CLIP Technology *New! 
  • ANSYS Discovery Live *New! 
Join PADT as we open our doors to the public for a celebration of all things engineering and manufacturing in Arizona.

Announcing CoresOnDemand.com – Dedicated Compute Power when you Need It

We are pleased to announce a new service that we feel is remote solving for FEA and CFD done right: CoresOnDemand.com.  We have taken our proven CUBE Simulation Computers and built a cluster that users can simply rent.  You get fast hardware, you get it all to yourself, and you receive fantastic support from the ANSYS experts at PADT.

It is not a time-share system, and it is not a typical "cloud" solution.  You tell us how many nodes you need and for how long, and we rent them to you. You can submit batch jobs or configure the machines however you need them.  Submit on the command line, through a batch scheduler, or run interactively. And when you are done, you do not have to send your files back to your desktop: we've loaded NICE DCV so you can do graphics-intensive pre- and post-processing from work or home, over the internet, on our head nodes.  You can even work through your iPad.
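For readers curious what submitting on the command line can look like, below is a minimal sketch of wrapping a distributed ANSYS Mechanical APDL batch solve in Python. The launcher name (ansys150), file names, and core count are illustrative assumptions that vary by release and installation; the flags shown (-b for batch, -dis for distributed, -np for core count, -i and -o for input and output files) are MAPDL's standard batch options.

    # Hedged sketch: launch a distributed MAPDL batch solve on a reserved node.
    # Launcher name, file names, and core count are assumptions; adjust to
    # match your installed ANSYS version and your job's files.
    import subprocess

    def run_mapdl_batch(input_deck="model.dat", output_file="model.out", cores=32):
        """Run ANSYS Mechanical APDL in distributed batch mode."""
        cmd = [
            "ansys150",         # MAPDL launcher; name varies by release
            "-b",               # batch mode
            "-dis",             # distributed-memory parallel
            "-np", str(cores),  # number of cores
            "-i", input_deck,   # input command file
            "-o", output_file,  # solver output file
        ]
        return subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run_mapdl_batch()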


If you visit our blog page a lot, you may have noticed the gray cloud logo with a big question mark next to it. If you guessed that was a hint that we were working on a cloud solution for ANSYS users, you were correct. We've had it up and running for a while, but we kept "testing" it with benchmarks for people buying CUBE computers, and we kept tweaking the setup to get the best user experience possible.  With today's announcement we are going live.

We created this service for a simple reason: customers kept calling and emailing to ask if they could rent time on our machines.  We got started on the hardware, but we also started surveying and talking to users. Everyone is talking about the cloud and HPC, but we found that few providers understood how to deliver the horsepower people needed in a usable way, and that users were frustrated with the offerings available. So we took our time and built a service that we would want to use, a service we would find considerable value in.


You can learn more by visiting www.CoresOnDemand.com, or by reading the official press release included below. To get you started, here are some key facts you should know:

  1. We are running PADT CUBE computers, hooked together with Infiniband. They are fast, they are loaded with RAM, and they have a ton of disk space. Since we do this type of solving all the time, we know what is needed.
  2. This is a Bring Your Own License (BYOL) service. You will need to lease the licenses you need from whoever provides your ANSYS software.  As an ANSYS channel partner, we can help that process go smoothly.
  3. You do not share the hardware.  If you reserve a node, it is your node; no one else but your company can log in.  You can rent by the week or by the day.
  4. When you are done, we save the data you want us to save and then wipe the machines.  If you want us to save your "image" we can do that for a fee so next time you use the service, we can restore it to right where you were last time.
  5. Right now we are focused on ANSYS software products only. We feel strongly about focusing on what we know and maximizing value to our customers.
  6. This service is backed by PADT's technical support and IT staff. You would be hard pressed to find any other HPC provider out there who knows more about how to run ANSYS Mechanical, ANSYS Mechanical APDL, ANSYS FLUENT, ANSYS CFX, ANSYS HFSS, ANSYS MAXWELL, ANSYS LS-DYNA, ANSYS AUTODYN, ICEM CFD, and much more.

To talk to our team about running your next big job on CoresOnDemand.com, contact us at 480-813-4884 or email cod@padtinc.com.


See the official Press Release here

Press Release:

CoresOnDemand.com Launches as Dedicated ANSYS Simulation
High Performance Cloud Compute Resource 

PADT launches CoresOnDemand.com, a dedicated resource for users who need to run ANSYS simulation software in the cloud on optimized high performance computers.

Tempe, AZ – April 29, 2015 – Phoenix Analysis & Design Technologies, Inc. (PADT), the Southwest’s largest provider of simulation, product development, and 3D Printing services and products, is pleased to announce the launch of a new dedicated high performance compute resource for users of ANSYS simulation software – CoresOnDemand.com.  The team at PADT used their own experience, and the experience of their customers, to develop this unique cloud-based solution that delivers exceptional performance and a superior user experience. Unlike most cloud solutions, CoresOnDemand.com does not use virtual machines, nor do users share compute nodes. With CoresOnDemand.com users reserve one or more nodes for a set amount of time, giving them exclusive access to the hardware, while allowing them to work interactively and to set up the environment the way they want it.

The cluster behind CoresOnDemand.com is built by PADT’s IT experts using their own CUBE Simulation Computers (http://www.padtinc.com/cube), systems that are optimized for solving numerical simulation problems quickly and efficiently. This advantage is coupled with support from PADT’s experienced team, recognized technical experts in all things ANSYS. As a certified ANSYS channel partner, PADT understands the product and licensing needs of users, a significant advantage over most cloud HPC solutions.

“We kept getting calls from people asking if they could rent time on our in-house cluster. So we took a look at what was out there and talked to users about their experiences with trying to do high-end simulation in the cloud,” commented Eric Miller, Co-Owner of PADT. “What we found was that almost everyone was disappointed with the pay-per-cpu-second model, with the lack of product understanding on the part of the providers, and mediocre performance.  They also complained about having to bring large files back to their desktops to post-process. We designed CoresOnDemand.com to solve those problems.”

In addition to exclusive nodes, great hardware, and ANSYS expertise, CoresOnDemand.com adds another advantage by leveraging NICE Desktop Cloud Visualization (https://www.nice-software.com/products/dcv) to allow users to have true interactive connections to the cluster with real-time 3D graphics.  This avoids the need to download huge files or to run blind in batch mode to review results. And as you would expect, the network connections and file transfer protocols available are industry-standard and encrypted.

The initial cluster is configured with Intel and AMD-based CUBE Simulation nodes, connected through a high-speed Infiniband interconnect.  Each compute node has enough RAM and disk space to handle the most challenging FEA or CFD solves.  All ANSYS solvers and prep/post tools are available for use including: ANSYS Mechanical, ANSYS Mechanical APDL, ANSYS FLUENT, ANSYS CFX, ANSYS HFSS, ANSYS MAXWELL, ANSYS LS-DYNA, ANSYS AUTODYN, ICEM CFD, and much more. Users can serve their own licenses to CoresOnDemand.com or obtain a short-term lease, and PADT’s experts are on hand to help design the most effective licensing solution.

Pre-launch testing by PADT’s customers has shown that this model for remote on-demand solving works well.  Users were able to log in, configure their environment from their desktop at work or home, mesh, solve, and review results as if they had the same horsepower sitting right next to their desk.

To learn more about CoresOnDemand, visit http://www.coresondemand.com, email cod@padtinc.com, or contact PADT at 480.813.4884.

About Phoenix Analysis and Design Technologies

Phoenix Analysis and Design Technologies, Inc. (PADT) is an engineering product and services company that focuses on helping customers who develop physical products by providing Numerical Simulation, Product Development, and Rapid Prototyping solutions. PADT’s worldwide reputation for technical excellence and experienced staff is based on its proven record of building long-term, win-win partnerships with vendors and customers. Since its establishment in 1994, companies have relied on PADT because “We Make Innovation Work.”  With over 75 employees, PADT services customers from its headquarters at the Arizona State University Research Park in Tempe, Arizona, and from offices in Littleton, Colorado, Albuquerque, New Mexico, and Murray, Utah, as well as through staff members located around the country. More information on PADT can be found at http://www.PADTINC.com.

Using Bright CM to Manage a Linux Cluster

What goes into managing a Linux HPC (High Performance Computing) cluster?

There is an endless list of software, tools and configurations that are required or recommended for efficiently managing a shared HPC cluster environment.

A shared HPC cluster typically has many layers that deliver a usable environment, one that doesn’t depend on users coordinating closely or on system administrators being superheroes of late-night patching and just-in-time recovery.


Figure 1. Typical layers of a shared HPC cluster.

For each layer in the diagram above there are numerous open-source and paid software tools to choose from. The thing to note is that it’s not just a matter of picking one. System administrators have to weigh user requirements, compatibility tweaks, and ease of implementation and use to come up with the perfect recipe (much like carrot cake). Once the choices have been made, users and system administrators have to train on, learn, and start utilizing these tools.
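As a tiny illustration of what the monitoring layer has to do, here is a sketch that polls load information from a few nodes over SSH. The node names are hypothetical and it assumes passwordless SSH is already configured; a real cluster would use a proper monitoring tool or cluster manager rather than hand-rolled scripts like this.

    # Hedged sketch: poll 'uptime' on cluster nodes over SSH.
    # Node names are hypothetical; assumes passwordless SSH, which is itself
    # one of the layers a cluster manager normally sets up for you.
    import subprocess

    NODES = ["node01", "node02", "node03"]  # hypothetical node names

    def node_load(node):
        """Return the uptime/load line from a node, or an error marker."""
        try:
            result = subprocess.run(
                ["ssh", node, "uptime"],
                capture_output=True, text=True, timeout=10, check=True,
            )
            return result.stdout.strip()
        except (subprocess.SubprocessError, OSError) as err:
            return "unreachable ({})".format(err)

    for node in NODES:
        print("{}: {}".format(node, node_load(node)))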

HPC @ PADT Inc.

At PADT Inc. we have several Linux-based HPC clusters that are in high demand. Our clusters are based on our CUBE High Value Performance Computing (HVPC) systems and are designed to optimize the performance of numerical simulation software. We were facing several challenges that are common to building and maintaining HPC clusters, mainly in the areas of security, imaging and deployment, resource management, monitoring, and maintenance.

To solve these challenges there is an endless list of software tools and packages, both open-source and commercial. Each one comes with its own steep learning curve and the mounting time needed to test and implement it.

Enter: Bright Computing

After testing several tools, we came across Bright Cluster Manager (Bright CM) from Bright Computing. Bright CM eliminates the need for system administrators to manually install and configure the most common HPC cluster components. On top of that, it provides the majority of common HPC software packages, tools, and libraries in its default software image.

A Bright CM cluster installation starts with an extremely useful wizard that asks all of the right questions while giving the user full control to customize the installation. With a notepad, a couple of hours, and a basic understanding of HPC clusters, you are ready to install your applications.


Figure 2. Installation Wizard

An all-knowing dashboard helps system admins manage and monitor their cluster(s); if you prefer the CLI, the CM shell provides full functionality from the command line. From the dashboard, system admins can manage multiple clusters down to the finest details.


Figure 3. Cluster Management Interface.
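As a rough sketch of driving that CLI from a script, the snippet below shells out to cmsh in its non-interactive mode. The exact command string ("device; list") is an assumption based on Bright's documentation; verify it against the cmsh reference for your installed version.

    # Hedged sketch: query the cluster through Bright's cmsh CLI.
    # The "device; list" command string is an assumption from Bright's docs;
    # check the cmsh reference for your version before relying on it.
    import subprocess

    def cmsh(command):
        """Run a non-interactive cmsh command and return its output."""
        result = subprocess.run(
            ["cmsh", "-c", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(cmsh("device; list"))  # list the nodes the manager knows about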

An extensive cluster monitoring interface allows system admins, users, and key stakeholders to generate and view detailed reports about the different cluster components.


Figure 4. Cluster Monitoring Interface.

Bright CM has proven to be a valuable tool in managing and optimizing our HPC environment. For further information and a demo of Bright Cluster Manager, please contact sales@padtinc.com.

From Piles to Power – My First PADT PC Build

Welcome to the PADT IT Department: Now Build Your Own PC

[Editor's Note: Ahmed has been here a lot longer than 2 weeks, but we have been keeping him busy, so he is just now finding the time to publish this.]

I have been working at PADT for a little over 2 weeks now. After taking the ceremonial office tour that left a fine white powder all over my shoes (it's a PADT special treat), I was taken to meet my team: David Mastel ("My Boss" for short), the IT commander-in-chief at PADT, and Sam Goff, the all-knowing systems administrator.

I was shown to a cubicle that reminded me of the shady computer “recycling” outfits you’d see on a news report highlighting the vast amounts of abandoned hardware; except there were no CRT (tube) screens or little children working as slave labor.

Sacred Tradition

This tradition started with Sam, then Manny, and now it was my turn to take this rite of passage. As part of the PADT IT department, I am required by sacred tradition to build my own desktop with my bare hands. Then I was handed a screwdriver.

My background is mixed and diverse, but it mostly has one thing in common: we depended on pre-built servers, systems, and packages. Branded machines carry an embedded promise of reliability, support, and superiority over custom-built machines.

What most people don’t know about branded machines is that they carry two pretty heavy tariffs:

  1. First, you are paying upfront for the support structure, development, R&D, and supply chains required to pump out thousands of machines.
  2. Second, because these large companies are trying to maximize their margins, they will look for a proprietary, cost-effective configuration that will:
    1. Most probably fail or become obsolete as close as possible to the 3-year “expected” lifespan of a computer.
    2. Lock users into buying any subsequent upgrade or spare part from them.

Long story short, the last time I fully built a desktop computer was back in college, when a 2GB hard disk was a technological breakthrough and we could only imagine how many MP3s we could store on it.

The Build

There were two computer cases on the ground. One resembled a 1990 Mercury Sable that was, at best, tolerable as a new car; the other looked more like a 1990 BMW 325ci: a little old, but carrying a heritage and the potential to be great once again.

So, with my obvious choice of case, I began to collect parts from the different bins and drawers, and I was immediately shocked at how “organized” this room really was. I picked up the following:

There are a few things I would have chosen differently, but they were either not available at the time of the build or ridiculous for a work desktop:

  • Replacing two drives with SSDs to hold the OS and applications
  • Exploring a more powerful Nvidia card (not really required, but desired)

So, after a couple of hours of fidgeting and checking manuals, this is what the build looks like.

(The case above was the first prototype ANSYS numerical simulation workstation in 2010. It has a special place in David's heart.)

Now to the Good Stuff! Benchmarking the Rebuilt CUBE Prototype

ANSYS R15.0.7 FEA Benchmarks

Below are the results for the v15sp5 benchmark running distributed parallel on 4 cores.

ANSYS R15.0.7 CFD Benchmarks

Below are the results for the aircraft_2m benchmark using parallel processing on 4 cores.

This machine is a really cool sleeper computer that is more than capable of handling whatever I throw at it.

The only thing that worries me: when Sam handed me the case to get started, David was trying, and failing, to hide a smile, which makes me feel there is something obviously wrong with my first build that I failed to catch. I guess I will just wait and see.

geoCUBE: Computers for Scanning

PADT just released a line of computer workstations specifically designed for use with a variety of optical scanners: geoCUBE Scanning Workstations.

Scanning technology has come a long way.  It is relatively easy to scan a real physical part with a variety of different scanning technologies and capture the geometry for use in inspection, design, reverse engineering, or to directly replicate a part with 3D Printing.  The problem is that a good scanner produces a huge number of data points, and a standard office computer, laptop, or even most CAD workstations bog down, and perhaps even crash, when you try to view or manipulate that much data.

We ran into that exact problem here at PADT when we were doing scanning services for customers.  On a nice CAD workstation it was taking almost a whole day to clean up and process a full scan of a large part.  Our manufacturing team asked if they could use one of the CUBE Simulation Computers we run CFD on.  If you know CFD people, you know they said “No, but can I also run on your box if you are not using it?”  So the team went to our IT staff, the people who design CUBE systems, and asked for a custom-built machine for scanning.

The result was a breakthrough.  That 20-hour job finished in about two hours, and we were able to spin the point cloud and the resulting triangle file around on the screen in real time. We liked it so much we decided to come up with four systems spanning the needs of scanning users, and to offer them along with the scanners we sell, or to anyone who might need one.

Below is a screen shot of the table showing the four systems, from a basic small box you can use to drive your scanner, to the powerhouse system we use.  You can download the brochure here, or visit the web page here.

[Table: specifications for the four geoCUBE Scanning Workstation configurations]

As always, feel free to contact us to get more information and see how we can help you find the right scanner and the perfect computer to go with it.

Still Time to Attend an ANSYS User Group Conference

April is almost over, and you know what that means: it's time for the ANSYS Convergence Regional Conferences to begin.  These free events are held once a year and are an opportunity for the entire spectrum of ANSYS users to get together for one day. Each event is a bit different, but the goal is the same: users share presentations on what they have done, and the experts from ANSYS, Inc. share what is new and exciting with the products.

These events are technical in nature, with a general session followed by specific technical tracks.  

PADT will be at the Santa Clara and Houston events this year, highlighting our services and products and presenting in Santa Clara.

The four US events are:

There are also 12 events in Asia, 12 in Europe, 7 in Latin America, and 7 in the Africa/Middle East region.
See the full list here.

Remember, it’s free and always educational.  Even in our modern world of blogs, forums, and webinars, it is valuable to just spend some time talking with experts and other users.

PADT is a “Silver Sponsor” so we would love to see you there!

CUBE Systems are Now Part of the ANSYS, Inc. HPC Partner Program


The relationship between ANSYS, Inc. and PADT is a long one that runs deep. And that relationship just got stronger with PADT joining the HPC Partner Program with our line of CUBE compute systems specifically designed for simulation. The partner program was set up by ANSYS, Inc. to work:

“… with leaders in high-performance computing (HPC) to ensure that the engineering simulation software is optimized on the latest computing platforms. In addition, HPC partners work with ANSYS to develop specific guidelines and recommended hardware and system configurations. This helps customers to navigate the rapidly changing HPC landscape and acquire the optimum infrastructure for running ANSYS software. This mutual commitment means that ANSYS customers get outstanding value from their overall HPC investment.”


PADT is very excited to be part of this program and to contribute to the ANSYS/HPC community as much as we can.  Users know they can count on PADT’s strong technical expertise with ANSYS Mechanical, ANSYS Mechanical APDL, ANSYS FLUENT, ANSYS CFX, ANSYS Maxwell, ANSYS HFSS, and other ANSYS, Inc. products, a true differentiator when compared with other hardware providers.

Customers around the US have fallen in love with their CUBE workstations, servers, mini-clusters, and clusters, finding them to be the right mix between price and performance. CUBE systems let users carry out larger simulations, with greater accuracy, in less time, at a lower cost than name-brand solutions. This leaves you more cash to buy more hardware or software.

Assembled by PADT’s IT staff, CUBE computing systems are delivered with the customer’s simulation software loaded and tested. We configure each system specifically for simulation, making choices based upon PADT’s extensive experience using similar systems for the same kind of work. We do not add things a simulation user does not need, and focus on the hardware and setup that delivers performance.


Is it time for you to upgrade your systems?  Is it time for you to “step out of the box, and step into a CUBE?”  Download a brochure of typical systems to see how much your money can actually buy, visit the website, or contact us.  Our experts will spend time with you to understand your needs, your budget, and your true goals for HPC. Then we will design a custom system to meet those needs.

 

This May Be the Fastest ANSYS Mechanical Workstation we Have Built So Far

The Build Up

It's 6:30 am and a dark shadow looms in Eric's doorway. I wait until Eric finishes his Monday morning company updates. "Eric, check this out: on the CUBE HVPC w16i-k20x we built for our latest customer, ANSYS Mechanical scaled to 16 cores on our test run." Eric's left eyebrow rises slightly. I know I have him now; I have his full and complete attention.

Why is this huge news?

This is why: Eric knows, and probably many of you reading this also know, that solving differential equations distributed and in parallel, with a graphics processing unit helping out, makes our hearts skip a beat. The finite element method used for solving these equations is CPU intensive and I/O intensive. This is headline-news-type stuff to us geek types. We love scratching our way along the compute processing power grid to squeeze every bit of performance out of our hardware!

Oh, and yes, a lower time to solve is better! No GPUs were harmed in these tests; only one NVIDIA TESLA K20X GPU was used.

Take a Deep Breath and Start from the Beginning:

I have been gathering and hoarding years' worth of ANSYS Mechanical benchmark data. Why? Not sure, really; after all, I am a wanna-be ANSYS analyst. It wasn't until a couple of weeks ago that I woke up to the why again. My CUBE HVPC team sold a dual-socket INTEL Ivy Bridge based workstation to a customer in Washington state. Once we got the order, our Supermicro reseller's phone was bouncing off the desk. After some back and forth, the parts arrived directly from Supermicro in California. Yes, designed in the U.S.A. And they show up in one big box:


Normal is as Normal Does

As per normal is as normal does, I ran the series of ANSYS benchmarks: you know, the type that perform coupled-physics simulations and solve really huge matrices. So I ran ANSYS v14sp-5, the ANSYS FLUENT benchmarks, and some benchmarks for this customer, the types of runs they want to use the new machine for. I was talking these benchmark results over with Eric, and he thought that now was the perfect time to release the flood of benchmark data. Well, some (a smidge) of the benchmark data; I do admit the data gets overwhelming, so I have tried to trim the charts and graphs down to the bare minimum. So what makes this workstation recipe for the fastest ANSYS Mechanical workstation so special? What is truly exciting enough to tip me over in my overstuffed black leather chair?

The Fastest Ever? Yup, We Have Been Changed Forever

Not only is it the fastest ANSYS Mechanical workstation running on CUBE HVPC hardware, using two 22-nanometer INTEL CPUs; it is also the first time we have had an INTEL dual-socket based workstation keep getting faster all the way up to its maximum core count when solving in ANSYS Mechanical APDL.

Previously, the fastest time was on the CUBE HVPC w16i-GPU workstation listed below, and it peaked at 14 cores.

Unfortunately, we only had time to gather two runs, at 14 and 16 cores, before we shipped the system off. But you can see how fast the new machine was in the table below: it was close to the previous system at 14 cores, but blew past it at 16, whereas the older system actually got clogged up and slowed down:

Run Time (sec)

Cores Used    Config B    Config C    Config D
14            129.1       95.1        91.7
16            130.5       99.0        83.5
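A quick way to quantify that clogging is to compute each configuration's 14-to-16-core scaling ratio from the table above; a ratio below 1.0 means the extra two cores actually made the run slower. A minimal sketch:

    # Scaling check using the run times (seconds) from the table above.
    # ratio > 1.0 means the solve got faster going from 14 to 16 cores.
    times = {
        "Config B": {14: 129.1, 16: 130.5},
        "Config C": {14: 95.1, 16: 99.0},
        "Config D": {14: 91.7, 16: 83.5},
    }

    for config, t in sorted(times.items()):
        ratio = t[14] / t[16]
        trend = "faster" if ratio > 1.0 else "slower"
        print("{}: {:.2f}x ({} at 16 cores)".format(config, ratio, trend))

    # Only Config D (about 1.10x) keeps gaining at 16 cores; B and C
    # both come in below 1.0, i.e. they slowed down.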

And here are the results as a bar graph for all the runs with this benchmark:

[Bar chart: run times for all configurations on the ANSYS v14sp-5 benchmark, November 2013]

We can’t wait to build one of these with more than one motherboard, maybe a 32-core system with Infiniband connecting the two. That should allow some very fast run times on some very, very large problems.

ANSYS v14sp-5 (R14) Benchmark Details

  • Elements: SOLID187, CONTA174, TARGE170
  • Nodes: 715,008
  • Materials: linear elastic
  • Nonlinearities: standard contact
  • Loading: rotational velocity
  • Other: coupling, symmetric matrix, sparse solver
  • Total DOF: 2.123 million
  • ANSYS 14.5.7
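As a sanity check on those numbers: SOLID187 is a 3-D structural element carrying three displacement DOF (UX, UY, UZ) at each node, so 715,008 nodes gives an upper bound of about 2.145 million DOF, and constraints and coupling trimming that down to the reported 2.123 million is plausible. The arithmetic:

    # Rough DOF sanity check for the v14sp-5 model described above.
    # SOLID187 carries UX, UY, UZ at each node (3 DOF per node).
    nodes = 715_008
    dof_per_node = 3

    upper_bound = nodes * dof_per_node   # 2,145,024 DOF before BCs
    reported = 2_123_000                 # "2.123 million" from the list

    print("Upper bound: {:,} DOF".format(upper_bound))
    print("Reported:    {:,} DOF".format(reported))
    print("Trimmed by constraints/coupling: ~{:,}".format(upper_bound - reported))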

Here are the details and the data of the March 8, 2013 workstation:

Configuration C = CUBE HVPC w16i-GPU

  • CPU: 2x INTEL XEON e5-2690 (2.9GHz, 8 cores each)
  • GPU: NVIDIA TESLA K20 companion processor
  • GRAPHICS: NVIDIA QUADRO K5000
  • RAM: 128GB DDR3 1600MHz ECC
  • HD RAID Controller: SMC LSI 2208 6Gbps
  • HDD (OS and apps): 160GB SATA III SSD
  • HDD (working directory): 6x 600GB SAS2 15k RPM 6Gbps
  • OS: Windows 7 Professional 64-bit, Linux 64-bit
  • Other: ANSYS R14.0.8 / ANSYS R14.5

Here are the details from the new, November 1, 2013 workstation:

Configuration D = CUBE HVPC w16i-k20x

  • CPU: 2x INTEL XEON e5-2687W V2 (3.4GHz)
  • GPU: NVIDIA TESLA K20X companion processor
  • GRAPHICS: NVIDIA QUADRO K4000
  • RAM: 128GB DDR3 1600MHz ECC
  • HDD (OS and apps): 4x 240GB enterprise-class Samsung SSD 6Gbps
  • HD RAID Controller: SMC LSI 2208 6Gbps
  • OS: Windows 7 Professional 64-bit, Linux 64-bit
  • Other: ANSYS 14.5.7

You can view the output from the run on the newer box (Configuration D) here:

Here is a picture of the Configuration D machine with the info on its guts:


What is Inside that Chip:

The one (or two) CPU that rules them all: http://ark.intel.com/products/76161/

Intel® Xeon® Processor E5-2687W v2

  • Status: Launched
  • Launch Date: Q3'13
  • Processor Number: E5-2687WV2
  • # of Cores: 8
  • # of Threads: 16
  • Clock Speed: 3.4 GHz
  • Max Turbo Frequency: 4 GHz
  • Cache: 25 MB
  • Intel® QPI Speed: 8 GT/s
  • # of QPI Links: 2
  • Instruction Set: 64-bit
  • Instruction Set Extensions: Intel® AVX
  • Embedded Options Available: No
  • Lithography: 22 nm
  • Scalability: 2S Only
  • Max TDP: 150 W
  • VID Voltage Range: 0.65–1.30V
  • Recommended Customer Price: BOX: $2112.00, TRAY: $2108.00

The GPUs that just keep getting better and better:

Feature                                            TESLA C2075    TESLA K20X     TESLA K20
Number and type of GPU                             FERMI          Kepler GK110   Kepler GK110
Peak double precision floating point performance   515 Gflops     1.31 Tflops    1.17 Tflops
Peak single precision floating point performance   1.03 Tflops    3.95 Tflops    3.52 Tflops
Memory bandwidth (ECC off)                         144 GB/sec     250 GB/sec     208 GB/sec
Memory size (GDDR5)                                6GB            6GB            5GB
CUDA cores                                         448            2688           2496
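Those double-precision rows are the ones that matter most for FEA solves. A quick sketch of the generational jump, computed from the table above:

    # Double-precision throughput ratios from the GPU table above.
    # Units are Tflops (the C2075's 515 Gflops = 0.515 Tflops).
    dp_tflops = {"TESLA C2075": 0.515, "TESLA K20X": 1.31, "TESLA K20": 1.17}

    baseline = dp_tflops["TESLA C2075"]
    for card, tflops in dp_tflops.items():
        print("{}: {:.2f}x the C2075".format(card, tflops / baseline))

    # The K20X comes out at roughly 2.5x the Fermi-generation C2075.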


Ready to Try One Out?

If you are as impressed as we are, then it is time for you to try out this next iteration of the Intel chip, configured for simulation by PADT, on your problems.  There is no reason to use a CAD box or a bloated web server as your HPC workstation for running ANSYS Mechanical and solving in ANSYS Mechanical APDL.  Give us a call; our team will take the time to understand the types of problems you run and the IT environment you work in, and custom-configure the right system for you:

http://www.padtinc.com/products/hardware/cube-hvpc,
email: garrett.smith@padtinc.com,
or call 480.813.4884

Columbia: PADT’s Killer Kilo-Core CUBE Cluster is Online

In the back of PADT's product development lab is a closet.  Yesterday afternoon PADT's tireless IT team crammed themselves into the back of that closet and powered up our new cluster, bringing 1,104 connected cores online.  It sounded like a jet taking off when we submitted a test FLUENT solve across all the cores.  Music to our ears.

We have recently been slammed with benchmarks for ANSYS and CUBE customers on top of our normal load of services work, so we decided it was time to pull the trigger, double the size of our cluster, and add a storage node.  And of course, we needed it yesterday.  So the IT team rolled up their sleeves, configured a design, ordered hardware, built it up, tested it all, and got it online, in less than two weeks.  This was while they did their normal IT work and dealt with a steady stream of CUBE sales inquiries.  But it was a labor of love. We have all dreamed about breaking the thousand-core barrier on one system, and this was our chance to make it happen.

If you need more horsepower and are looking for a solution that hits that sweet spot between cost and performance, visit our CUBE page at www.cube-hvpc.com and learn more about our workstations, servers, and clusters.  Our team (after they get a little rest) will be more than happy to work with you to configure the right system for your real world needs.

Now that the sales plug is done, let's take a look at the stats on this bad boy:

Name: Columbia, after the class of battlestars in Battlestar Galactica
Brand: CUBE High Value Performance Compute Cluster, by PADT
Nodes: 18 (17 compute, 1 storage/control; 4 CPUs per node)
Cores: 1104 AMD Opteron (4x 6308 3.5 GHz, 32x 6278 2.4 GHz, 36x 6380 2.5 GHz)
Interconnect: 18-port MELLANOX 4X QDR Infiniband switch
Memory: 4.864 Terabytes
Solve Disk: 43.5 TB RAID 0
Storage Disk: 64 TB RAID 50
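The core count checks out against the CPU mix. Assuming the published core counts for those Opteron models (the 6308 is a 4-core part; the 6278 and 6380 are 16-core parts), the 72 chips across 18 four-socket nodes total exactly 1104 cores:

    # Verify Columbia's 1104-core total from its CPU mix.
    # Cores per chip are the published specs for these Opteron models
    # (6308: 4 cores; 6278 and 6380: 16 cores each); that detail is an
    # assumption pulled from AMD's product pages, not from the post itself.
    chips = [
        ("Opteron 6308", 4, 4),    # (model, chip count, cores per chip)
        ("Opteron 6278", 32, 16),
        ("Opteron 6380", 36, 16),
    ]

    total_chips = sum(count for _, count, _ in chips)
    total_cores = sum(count * cores for _, count, cores in chips)

    print("Chips: {} (= 18 nodes x 4 sockets)".format(total_chips))  # 72
    print("Cores: {}".format(total_cores))                           # 1104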

Here are some pictures of the build and the final product:

  • A huge delivery from our supplier, Supermicro, started the process. This was the first pallet.
  • The build included installing the largest power strip any of us had ever seen.
  • Building a cluster consists of doing the same thing, over and over and over again.
  • We took over PADT's clean room because it turns out you need a lot of space to build something this big.
  • It is fun to get the chance to build the machine you always wanted to build.
  • 2AM selfie: still going strong!
  • Almost there. After blowing a breaker, we needed to wait for more power to be routed to the closet.
  • Up and running! Ratchet and Clank providing cooling air containment.

David, Sam, and Manny deserve a big shout-out for doing such a great job getting this thing up and running so fast!

When I logged on to my first computer, a TRS-80, in my high-school computer lab, I never, ever thought I would be running on a machine this powerful.  And I would have told people they were crazy if they said a machine with this much throughput would cost less than $300,000.  It is a good time to be a simulation user!

Now I just need to find a bigger closet for when we double the size again…


2000 Core Milestone Passed for CUBE HVPC Systems

As we put the finishing touches on the latest 512-core CUBE HVPC cluster, PADT is happy to report that there are now 2,042 cores worth of High Value Performance Computing (HVPC) power out there in the form of PADT's CUBE computer systems.  That is 2,042 Intel and AMD cores crunching away in workstations, compute servers, and mini-clusters, chugging on CFD, explicit dynamics, and good old-fashioned structural models, producing more accurate results in less time for less cost.

When PADT started selling CUBE HVPC systems, it was for a very simple reason: our customers wanted to buy more compute horsepower, but they could not afford it within their existing budgets. They saw the systems we were using and asked if we could build one for them.  We did. And now we have put together enough systems to reach 2,042 cores and over 9.5TB of RAM.


Our Latest Cluster is Ready to Ship

We just finished testing ANSYS, FLUENT, and HFSS on our latest build, a 512-core AMD based cluster. It is a nice system:

  • 512 cores: 2.5GHz AMD Opteron 6380 processors, 16 cores per chip, 4 chips per node, 8 nodes
  • 2,048 GB RAM: 256GB per node, 8 nodes
  • 24 TB disk space (RAID 0): 3TB per node, 8 nodes
  • 16-port 40Gbps Infiniband switch (so they can connect to their older cluster as well)
  • Linux

All for well under $180,000.
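That price puts a handy upper bound on cost per core, which is the "high value" part of HVPC. A quick sketch using the numbers above (treating $180,000 as a ceiling, since the actual price was below it):

    # Upper-bound cost metrics for the 512-core cluster described above.
    # $180,000 is a ceiling ("well under"), so the real figures are lower.
    price_ceiling = 180_000   # dollars
    cores = 512
    ram_gb = 2_048

    print("At most ${:.0f} per core".format(price_ceiling / cores))      # ~$352
    print("At most ${:.0f} per GB of RAM".format(price_ceiling / ram_gb))  # ~$88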

It was so pretty that we took some time to take some nice images of it (click to see the full size):


And it sounded so awesome that we took this video so everyone can hear it spooling up on a FLUENT benchmark:

If that made you smile, you are a simulation geek!

Next we are building two 64-core compute servers for another repeat customer, with an Infiniband switch to hook up to their two existing CUBE systems. This will get them to a 256-core cluster.

We will let you know when we get to 5000 cores out there!

Are you ready to step out of the box, and step into a CUBE?  Contact us to get a quote for your next simulation workstation, compute server, or cluster.

Building CUBE Mini-Clusters in the Clean Room

It is a busy time in the world of CUBE computers. We are building our own new cluster, replacing a couple of older file servers we bought from "those other guys," and building a 128-core mini-cluster for a new CUBE customer.  We ran out of room in the IT cubicle, so we looked around and found that PADT's clean room was not being used.  A few tables and tools later, and we had a mini-cluster assembly facility.


With the orders that customers have told us are on the way before the end of the year, this is going to be a busy area through December.