There is a closet in the back of PADT’s product development lab. It does not store empty boxes, old files, or obsolete hardware. Within that closet is a monster. Not the sort of monster that scares little children at night. No, this is a monster that puts fear into the hearts of those who try to paint high performance computing as a difficult and expensive task to be undertaken only by those in the priesthood. It makes salespeople who earn fat commissions by selling consulting services and unnecessary add-ons quake in fear.
This closet holds PADT’s latest upgrade to our compute infrastructure: a 512-core CUBE HVPC Cluster. No data center, no special consultants, no expensive add-ons. Just 512 cores chugging away solving FLUENT and CFX problems, and pumping a large amount of heat up into the ceiling.
Here are the specifics:
CUBE C512 Columbia Class Cluster
- 512 AMD 2.4 GHz cores (8 nodes, 4 sockets per node, 16 cores per socket)
- 2 TB RAM (256 GB per node of DDR3-1600 ECC RAM)
- RAID controller card (1 per node)
- 24 TB data disk space (3 TB per node of 15k SAS2 drives in RAID0)
- InfiniBand (8-port switch, 40 Gbps)
- 52-port GigE switch connected to 2 GigE ports per node
- 42U rack with thermal convection ducting (chimney)
- Keyboard, monitor, and mouse in a drawer
- CentOS (switching to Red Hat soon)
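To give a feel for how a job actually lands on all of that hardware, here is a minimal sketch (in Python, since the exact scripting varies by site) of building a machine file for the eight nodes and launching a 512-way parallel FLUENT run over the InfiniBand fabric. The hostnames, journal file name, and the assumption that `fluent` is on the PATH are all hypothetical; the flags shown are the standard FLUENT parallel launch options, but check them against your version’s documentation.

```python
#!/usr/bin/env python
"""Sketch: build a machine file for an 8-node cluster and launch FLUENT.

Hostnames, paths, and the journal file are hypothetical placeholders;
adjust everything for your own site.
"""
import subprocess

NODES = ["cube-n%d" % i for i in range(1, 9)]  # hypothetical node hostnames
CORES_PER_NODE = 64                            # 4 sockets x 16 cores

def write_machine_file(path="machines.txt"):
    # FLUENT accepts a hosts file with one hostname per line; listing a
    # node once per core assigns that many processes to it.
    with open(path, "w") as f:
        for node in NODES:
            for _ in range(CORES_PER_NODE):
                f.write(node + "\n")
    return path

if __name__ == "__main__":
    machines = write_machine_file()
    # 512-way parallel, 3D double precision, InfiniBand interconnect, no GUI.
    # Flags: -t = total processes, -cnf = machine file, -pib = InfiniBand,
    # -g = run without GUI, -i = journal file. Verify against your version.
    subprocess.call([
        "fluent", "3ddp",
        "-t%d" % (len(NODES) * CORES_PER_NODE),
        "-cnf=%s" % machines,
        "-pib", "-g", "-i", "run_case.jou",  # journal name is hypothetical
    ])
```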
We built this system with CFD simulation in mind. The original goal was to provide a proof of concept to expand our CUBE HVPC offering, showing that you can create a cluster of this size, with very good speed, for a price that small- and medium-sized companies can afford. We also needed a way to run large problems for benchmarks in support of our ANSYS sales efforts and to provide faster technical support to our FLUENT and CFX customers. We already have a growing queue of benchmarks waiting to get onto the machine.
The image above is the glamour shot. Here is what it looks like in the closet:
Keeping with our theme of High Value Performance Computing, we stuck it in a closet that was built for telephone and networking equipment back at the turn of the century, when Motorola had this suite. We were able to fit a modern rack next to an old rack that was already in there. We then used the included duct to push the hot air up into the ceiling space and moved the A/C ducting so it blows right into the front of the units. We did need to keep that flow going into the rack instead of into the area under the networking and telephone switches, so we used an old video game poster:
Anyone remember Ratchet and Clank?
Best PS2 games ever.
It works well and adds a little color to the closet.
So far our testing has shown some great numbers. It is not the fastest cluster out there, but if you look at the cost, it offers incredible performance. You could add a drive array over InfiniBand, faster chips, and redundant power, and it would run faster and more reliably, but it would also cost much more. We are cheap, so we like this solution.
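For readers who want to reproduce the kind of numbers we look at, the arithmetic behind a strong-scaling benchmark is simple enough to fit in a few lines. The timings in this sketch are made up purely for illustration, not measured results from this cluster:

```python
def speedup(t_ref, t_parallel):
    """Strong-scaling speedup: reference wall time over parallel wall time."""
    return t_ref / t_parallel

def efficiency(t_ref, t_parallel, n_ref, n_cores):
    """Parallel efficiency relative to the reference core count (1.0 = ideal)."""
    return speedup(t_ref, t_parallel) * n_ref / n_cores

if __name__ == "__main__":
    # Illustrative numbers only: a case that takes 4000 s on 16 cores
    # and 160 s on 512 cores.
    t16, t512 = 4000.0, 160.0
    print("speedup vs 16 cores: %.1fx" % speedup(t16, t512))                # 25.0x
    print("efficiency at 512 cores: %.0f%%"
          % (100 * efficiency(t16, t512, 16, 512)))                        # 78%
```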
Oh yeah, with the parts from our old CFD cluster and some new bits, we will be building a smaller mini-cluster with Intel chips, a GPU or two, and a ton of fast disk and RAM to serve as our FEA cluster. Look for an update on that in a couple of months.
Interested in getting a cluster like this for your computing pleasure? A system configured like this one will run about $150,000 (video game poster is extra). Visit our CUBE page to learn more, or just shoot an email to firstname.lastname@example.org. Don’t worry, we don’t sell these with salespeople; someone from IT will get back to you.