[The following is an email that Manoj sent the tech support staff at PADT. I thought it was perfect for a The Focus posting, so here it is – Eric]
First of all, I found a way to get mesh generation time (in case no one knew about this). In ANSYS Mechanical, go to Tools->Options->Miscellaneous and set “Report Performance Diagnostics in Messages” to Yes. It will give you “Elapsed Time for Last Mesh Generation” in the Messages window.
Next, I did a benchmark on Parallel Part by Part meshing of a helicopter rotor hub with 502 bodies. The mesh settings produced a mesh of 560,026 elements and about 1.23 million nodes.
I did Parallel Part by Part Meshing on this model with 1, 2, 4, 6, and 8 cores, and here are the results.
Of course this is a small mesh so as the number of cores goes up, the benefits go down. I will be doing some testing on some models that take a lot longer to mesh but wanted to start simple. I’ll make a video summarizing that study showing how to set up the whole process and the results.
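To put results like these in perspective, it helps to convert raw meshing times into speedup and parallel efficiency. Here is a small Python sketch of that arithmetic; the timing values are placeholders for illustration, not the actual benchmark numbers:

```python
def speedup_and_efficiency(times):
    """Map core count -> (speedup, efficiency) relative to the 1-core run."""
    t1 = times[1]
    return {n: (t1 / t, (t1 / t) / n) for n, t in times.items()}

# Placeholder elapsed meshing times in seconds, keyed by core count.
times = {1: 120.0, 2: 70.0, 4: 45.0, 6: 40.0, 8: 35.0}
for n, (s, e) in sorted(speedup_and_efficiency(times).items()):
    print(f"{n} cores: speedup {s:.2f}x, efficiency {e:.0%}")
```

As the efficiency column drops with core count, you are seeing exactly the effect described above: a small mesh simply runs out of parallel work as cores are added.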
If you are curious, Manoj is running on a PADT CUBE server. As configured it would cost around $19k. You could knock a few thousand off the price if you changed up cards or went with CPUs that were not so leading edge.
Here are the SPECs:
CUBE HVPC w8i-KGPU
CUBE Mid-Tower Chassis – 26db quiet edition
Two XEON e5-2637 v2 CPUs (4 cores each, 3.5 GHz)
128 GB of DDR3-1600 ECC Reg RAM
NVIDIA QUADRO K5000
NVIDIA TESLA K20x
7.1 HD Audio (to really rock your webinars…)
SMC LSI 2208 RAID Card – 6Gbps
OS Drive: 2 x 256GB SSD 6gbps
Solver Array: 3 x 600GB SAS2 15k RPM 6Gbps
“… with leaders in high-performance computing (HPC) to ensure that the engineering simulation software is optimized on the latest computing platforms. In addition, HPC partners work with ANSYS to develop specific guidelines and recommended hardware and system configurations. This helps customers to navigate the rapidly changing HPC landscape and acquire the optimum infrastructure for running ANSYS software. This mutual commitment means that ANSYS customers get outstanding value from their overall HPC investment.”
PADT is very excited to be part of this program and to contribute to the ANSYS/HPC community as much as we can. Users know they can count on PADT’s strong technical expertise with ANSYS Mechanical, ANSYS Mechanical APDL, ANSYS FLUENT, ANSYS CFX, ANSYS Maxwell, ANSYS HFSS, and other ANSYS, Inc. products, a true differentiator when compared with other hardware providers.
Customers around the US have fallen in love with their CUBE workstations, servers, mini-clusters, and clusters finding them to be the right mix between price and performance. CUBE systems let users carry out larger simulations, with greater accuracy, in less time, at a lower cost than name-brand solutions. This leaves you more cash to buy more hardware or software.
Assembled by PADT’s IT staff, CUBE computing systems are delivered with the customer’s simulation software loaded and tested. We configure each system specifically for simulation, making choices based upon PADT’s extensive experience using similar systems for the same kind of work. We do not add things a simulation user does not need, and focus on the hardware and setup that delivers performance.
Is it time for you to upgrade your systems? Is it time for you to “step out of the box, and step in to a CUBE?” Download a brochure of typical systems to see how much your money can actually buy, visit the website, or contact us. Our experts will spend time with you to understand your needs, your budget, and what your true goals are for HPC. Then we will design your custom system to meet those needs.
It’s 6:30 am and a dark shadow looms in Eric’s doorway. I wait until Eric finishes his Monday morning company updates. “Eric, check this out: on the CUBE HVPC w16i-k20x we built for our latest customer, ANSYS Mechanical scaled to 16 cores on our test run.” Eric’s left eyebrow rises slightly. I know I have him now; I have his full and complete attention.
Why is this huge news?
This is why: Eric knows, and probably many of you reading this also know, that solving differential equations in parallel, distributed across CPUs and graphics processing units, makes our hearts skip a beat. The finite element method used for solving these equations is CPU intensive and I/O intensive. This is headline-news stuff to us geek types. We love scratching our way along the compute processing power grids to squeeze every bit of performance out of our hardware!
Oh, and yes, a lower time to solve is better! No GPUs were harmed in these tests; only one NVIDIA TESLA K20X GPU was used.
Take a Deep Breath and Start from the Beginning:
I have been gathering and hoarding years’ worth of ANSYS Mechanical benchmark data. Why? Not sure, really; after all, I am a wannabe ANSYS analyst. However, it wasn’t until a couple of weeks ago that I woke up to the why again. My CUBE HVPC team sold a dual-socket INTEL Ivy Bridge based workstation to a customer out of Washington state. Once we got the order, our Supermicro reseller’s phone was bouncing off the desk. After some back and forth, the parts arrived directly from Supermicro in California. Yes, designed in the U.S.A. And they show up in one big box:
Normal is as Normal Does
As per normal is as normal does, I ran the series of ANSYS benchmarks. You know the type: benchmarks that perform coupled-physics simulations and solve really huge matrices. So I ran the ANSYS v14sp-5 and ANSYS FLUENT benchmarks, plus some benchmarks for this customer, the types of runs they want to use the new machine for. I was talking these benchmark results over with Eric, and he thought that now is the perfect time to release the flood of benchmark data. Well, some of it; a smidge. I do admit the data gets overwhelming, so I have tried to trim the charts and graphs down to the bare minimum. So what makes this workstation recipe for the fastest ANSYS Mechanical workstation so special? What is truly exciting enough to tip me over in my overstuffed black leather chair?
The Fastest Ever? Yup we have been Changed Forever
Not only is it the fastest ANSYS Mechanical workstation running on CUBE HVPC hardware, it uses two 22-nanometer INTEL CPUs. Additionally, this is the first time that we have had an INTEL dual-socket workstation keep getting faster all the way up to its maximum core count when solving in ANSYS Mechanical APDL.
Previously, the fastest time was on the CUBE HVPC w16i-GPU workstation listed below, and it peaked at 14 cores.
Unfortunately, we only had time to gather two runs, at 14 and 16 cores, before we shipped the new machine off. But you can see how fast it was in this table. It was close to the previous system at 14 cores, but blew past it at 16, whereas the older system actually got clogged up and slowed down:
Run Time (Sec)
And here are the results as a bar graph for all the runs with this benchmark:
We can’t wait to build one of these with more than one motherboard, maybe a 32 core system with Infiniband connecting the two. That should allow some very fast run times on some very, very large problems.
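Before that 32 core build happens, Amdahl’s law gives a rough ceiling on what it could gain. This is a generic sketch, not a prediction from our benchmark data; the parallel fraction used here is an assumed illustrative value:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: ideal speedup when only part of the solve parallelizes."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assuming, purely for illustration, that 95% of the solve parallelizes:
print(f"16 cores: {amdahl_speedup(0.95, 16):.1f}x")
print(f"32 cores: {amdahl_speedup(0.95, 32):.1f}x")
```

Doubling the core count never doubles the speedup once the serial fraction and communication overhead start to dominate, which is one reason the interconnect choice matters so much at higher core counts.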
ANSYS V14sp-5 ANSYS R14 Benchmark Details
Elements : SOLID187, CONTA174, TARGE170
Nodes : 715,008
Materials : linear elastic
Nonlinearities : standard contact
Loading : rotational velocity
Other : coupling, symmetric matrix, sparse solver
Total DOF : 2.123 million
Here are the details and the data of the March 8, 2013 workstation:
The GPU’s that just keep getting better and better:
Number and Type of GPU
Peak double precision floating point performance
Peak single precision floating point performance
Memory Bandwidth (ECC off)
Memory Size (GDDR5)
Ready to Try one Out?
If you are as impressed as we are, then it is time for you to try out this next iteration of the Intel chip, configured for simulation by PADT, on your problems. There is no reason for you to be using a CAD box or a bloated web server as your HPC workstation for running ANSYS Mechanical and solving in ANSYS Mechanical APDL. Give us a call, our team will take the time to understand the types of problems you run, the IT environment you run in, and custom configure the right system for you:
Note: The information and data contained in this article were compiled and generated on September 12, 2013 by PADT, Inc. on CUBE HVPC hardware using FLUENT 14.5.7. Please remember that hardware and software change with new releases, and you should always try to run your own benchmarks, on your own typical problems, to understand how performance will impact you.
By David Mastel
Due to the response to the original article on this subject, I thought it would be good to do a quick follow-up using one of our latest CUBE HVPC builds. Again, the standard ANSYS Fluent benchmarks were used to gather the stats on this dual-socket INTEL XEON e5-2667 V2 configuration.
CUBE HVPC Test configurations (Same as in last comparison)
Release ANSYS FLUENT 14.5.7 Test Cases
(20 Iterations each)
Reacting Flow with Eddy Dissipation Model (eddy_417k)
Single-stage Turbomachinery Flow (turbo_500k)
External Flow Over an Aircraft Wing (aircraft_2m)
External Flow Over a Passenger Sedan (sedan_4m)
External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m)
External Flow Over a Truck Body 14m (truck_14m)
Here are the results for all three machines, total and average time:
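Since every case runs for 20 iterations, a handy way to compare machines is seconds per iteration. Here is a quick Python sketch of that calculation; the totals below are placeholders, not the measured results from the table:

```python
ITERATIONS = 20  # each Fluent benchmark case above is run for 20 iterations

def per_iteration(total_seconds, iterations=ITERATIONS):
    """Average wall-clock seconds per solver iteration."""
    return total_seconds / iterations

# Placeholder total solve times in seconds for one benchmark case.
for machine, total in {"older build": 260.0, "e5-2667V2 build": 180.0}.items():
    print(f"{machine}: {per_iteration(total):.1f} s per iteration")
```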
Summary: Are you sure? Part 2
So I didn’t have to have the “Are you sure?” conversation with Eric this time, and I didn’t bother triple-checking the results, because indeed the Ivy Bridge-EP Socket 2011 is one fast CPU! Combine that with a 22 nm (0.022 micron) manufacturing process and the data speaks for itself. For example, let’s dig back into the data for the External Flow Over a Truck Body with a Polyhedral Mesh (truck_poly_14m) benchmark and see what we find:
Current Pricing of INTEL® and AMD® CPUs
Here is the up-to-the-minute pricing for each CPU. I took these prices off of Newegg’s and Ingram Micro’s websites; they were captured on October 4, 2013.
Note that AMD’s price per CPU went up and the INTEL XEON e5-2690 went down. Again, these prices are based on today’s pricing, October 4, 2013.
AMD Opteron 6308 Abu Dhabi 3.5GHz 4MB L2 Cache 16MB L3 Cache Socket G34 115W Quad-Core Server Processor OS6308WKT4GHKWOF
PADT offers a line of high performance computing (HPC) systems specifically designed for CFD and FEA number crunching aimed at a balance between cost and performance. We call this concept High Value Performance Computing, or HVPC. These systems have allowed PADT and our customers to carry out larger simulations, with greater accuracy, in less time, at a lower cost than name-brand solutions. This leaves you more cash to buy more hardware or software.
In the back of PADT’s product development lab is a closet. Yesterday afternoon PADT’s tireless IT team crammed themselves into the back of that closet and powered up our new cluster, bringing 1104 connected cores online. It sounded like a jet taking off when we submitted a test FLUENT solve across all the cores. Music to our ears.
We have recently been slammed with benchmarks for ANSYS and CUBE customers, on top of our normal load of services work, so we decided it was time to pull the trigger and double the size of our cluster while adding a storage node. And of course, we needed it yesterday. So the IT team rolled up their sleeves, configured a design, ordered hardware, built it up, tested it all, and got it online in less than two weeks. This was while they did their normal IT work and dealt with a steady stream of CUBE sales inquiries. But it was a labor of love. We have all dreamed about breaking that thousand core barrier on one system, and this was our chance to make it happen.
If you need more horsepower and are looking for a solution that hits that sweet spot between cost and performance, visit our CUBE page at www.cube-hvpc.com and learn more about our workstations, servers, and clusters. Our team (after they get a little rest) will be more than happy to work with you to configure the right system for your real world needs.
Now that the sales plug is done, let’s take a look at the stats on this bad boy:
CUBE High Value Performance Compute Cluster, by PADT
17 compute, 1 storage/control node, 4 CPU per Node
AMD Opteron: 4 x 6308 3.5 GHz, 32 x 6278 2.4 GHz, 36 x 6380 2.5 GHz
18 port MELLANOX IB 4X QDR Infiniband switch
43.5 TB RAID 0
64 TB RAID 50
Here are some pictures of the build and the final product:
A huge delivery from our supplier, Supermicro, started the process. This was the first pallet.
The build included installing the largest power strip any of us had ever seen.
Building a cluster consists of doing the same thing, over and over and over again.
We took over PADT’s clean room because it turns out you need a lot of space to build something this big.
It is fun to get the chance to build the machine you always wanted to build
2AM Selfie: Still going strong!
Almost there. After blowing a breaker, we needed to wait for some more
power to be routed to the closet.
Up and running!
Ratchet and Clank providing cooling air containment.
David, Sam, and Manny deserve a big shout-out for doing such a great job getting this thing up and running so fast!
When I logged on to my first computer, a TRS-80, in my high-school computer lab, I never, ever thought I would be running on a machine this powerful. And I would have told people they were crazy if they said a machine with this much throughput would cost less than $300,000. It is a good time to be a simulation user!
Now I just need to find a bigger closet for when we double the size again…
Real World Lessons on How to Minimize Run Time for ANSYS HPC
Recently I had a VP of Engineering start a phone conversation with me that went something like this. “Well Dave, you see this is how it is. We just spent a truckload of money on a 256 core cluster and our solve times are slower now than with our previous 128 core cluster. What the *&(( is going on here?!”
I imagine many of us have heard similar stories or even received the same questions from our co-workers, CEOs, and directors. I immediately had my concerns, and I thought carefully about what I should say next. I recalled a conversation I had with one of my college professors. He told me that when I find myself stepping into gray areas, a good start to the conversation is to say, “Well, it depends…”
Guess what, that is exactly what I said. I said, “Well, it depends…” and then went on to explain two fundamental pillars of computer science that have plagued most of us since computers were created: “Well, you may be CPU bound (compute bound), or I/O bound.” He told me that they had paid a premium for the best CPUs on the market, along with some other details about the HPC cluster. Garnering a few more details about the cluster, my hunch was that it might actually be I/O bound.
Basically, this means that your cluster’s $2,000 CPUs are stalled out and sitting idle, waiting for new data to process so they can move on. I also briefly explained that his HPC cluster might instead be compute bound, though I quickly reassured him that this was maybe a 10% possibility and very unlikely. I knew the specifications of the CPUs in this HPC cluster, and the likelihood that they were the cause of his slow ANSYS run times was low on my radar. These were literally the latest and greatest CPUs ever to hit this planet (at that moment in time). So, let me step back a minute to refresh our memories on what it means when a system is compute bound.
Being compute bound means that the HPC cluster’s CPUs are sitting at 99 or 100% utilization for long periods of time. When this happens, very bad things begin to happen to your HPC cluster: CPU requests to peripherals are delayed or lost to the ether, and the cluster may become unresponsive or even lock up.
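That compute-bound versus I/O-bound triage boils down to a couple of utilization numbers you can pull from a monitoring tool such as top or sar. Here is a rough Python sketch of the decision; the thresholds are illustrative guesses, not ANSYS guidance:

```python
def classify_bound(cpu_busy_pct, io_wait_pct,
                   busy_threshold=95.0, wait_threshold=20.0):
    """Rough triage of a solve node from sampled utilization percentages."""
    if cpu_busy_pct >= busy_threshold:
        return "compute bound"       # CPUs pegged for long stretches
    if io_wait_pct >= wait_threshold:
        return "I/O bound"           # CPUs idle, waiting on disk or network
    return "neither: check interconnect, memory, licensing"

print(classify_bound(99.0, 1.0))   # compute bound
print(classify_bound(40.0, 35.0))  # I/O bound
```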
All I could hear was silence on the other end. “Dave, I get it, I understand. Please find the problem and fix our HPC cluster for us.” I happily agreed to help out! I concluded our phone conversation by asking that he send me the specific details, down to the nuts and bolts of the hardware. I also requested the operating system and software that were installed and used on the 256 core HPC cluster.
What NOT to do when configuring an ANSYS Distributed HPC cluster.
Seeking that perfect balance!
After a quick NDA signing, a few dollars exchanged, and a sprinkle of the other legal things that lawyers get excited about, I set out to discover the cause. After reviewing the information provided to me, I almost immediately saw three concerns:
Examples of “interconnect” in a sentence: 1. The systems are interconnected with a series of wires. 2. The lessons are designed to show students how the two subjects interconnect. 3. A series of interconnecting stories.
First known use of “interconnect”: 1865
Concern numero uno! Interconnect me
The company’s 256 core HPC cluster did have a second, dedicated GigE interconnect. But Distributed ANSYS is highly bandwidth and latency bound, often requiring more bandwidth than a dedicated NIC (Network Interface Card) can provide. Yes, the dedicated second GigE card was much better than trying to push all of the network traffic, including the CPU interconnect, through a single NIC. I had a few of the customer’s MAPDL output files that I could take a peek at, and after reviewing them it became fairly clear that interconnect communication speed between the sixteen 16-core servers in the cluster was not adequate. The master Message Passing Interface (MPI) process that Distributed ANSYS uses requires high bandwidth and low latency to scale properly to the other processes. The data bandwidth between cores solving locally on one machine will always be higher than the bandwidth traveling across the various interconnects (see below). ANSYS, Inc. recommends Infiniband for CPU interconnect traffic, and here are a couple of reasons why. See how the theoretical data limits increase going from Gigabit Ethernet up to FDR Infiniband.
Theoretical lane bandwidth limits for:
Gigabit Ethernet (GigE): ~128MB/s
Single Data Rate (SDR): ~328 MB/s
Double Data Rate (DDR): ~640 MB/s
Quad Data Rate (QDR): ~1,280 MB/s
Fourteen Data Rate (FDR): ~1,800 MB/s
GEEK CRED: A few years ago, companies such as MELLANOX started aggregating Infiniband lanes. The typical aggregation factors are 4X or even 12X. So, for example, the 4X QDR Infiniband switch and cards I use at PADT, and recommended to this customer, have (4 x 10 Gbit/s) or 5,120 MB/s of throughput! Here is a quick video I made of a MELLANOX IS5023 18-port 4X QDR full bi-directional switch in action:
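The per-lane figures above are basically the signaling rate divided by 8 bits per byte, and the 4X aggregation just multiplies by the lane count. Here is a small Python sketch of that arithmetic; note the article rounds some entries slightly differently, and real-world throughput is lower once encoding overhead is included:

```python
# Approximate per-lane signaling rates in Gbit/s (values chosen to
# reproduce the MB/s figures quoted in the text, not measured data).
LANE_GBPS = {"GigE": 1.024, "SDR": 2.5, "DDR": 5.0, "QDR": 10.24, "FDR": 14.4}

def bandwidth_mbps(link, lanes=1):
    """Theoretical throughput in MB/s: rate / 8 bits per byte, times lanes."""
    return LANE_GBPS[link] * 1000.0 / 8.0 * lanes

print(f"GigE:   {bandwidth_mbps('GigE'):.0f} MB/s")
print(f"QDR 1X: {bandwidth_mbps('QDR'):.0f} MB/s")
print(f"QDR 4X: {bandwidth_mbps('QDR', lanes=4):.0f} MB/s")
```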
This is how you do it with a CUBE HVPC! Below is an MAPDL output file from our CUBE HVPC w16i-GPU workstation running the ANSYS industry benchmark V14sp-5. I wanted to show the communication speeds between the master MPI process and the other solver processes to see just how fast the solvers can communicate. With a peak communication speed of 9,593 MB/s, this CUBE HVPC workstation rocks!
4u standard depth or rackmountable
1 x One Dual Socket
INTEL C602 Chipset
2 x INTEL e5-2690 @ 2.9GHz
2 x 8
128GB DDR3-1600 ECC Reg RAM
2 x 2.5″ SATA III 256GB SSD Drives RAID 0
DATA/HOME Hard Disk Drives
4 x 3.5″ SAS2 600GB 15kRPM drives RAID 0
SAS RAID (Onboard, Optional)
RAID 0 (OS RAID)
SAS RAID (RAID card, Optional)
LSI 2208 (DATA VOL RAID)
Dual GigE (Intel i350)
NVIDIA QUADRO K5000
NVIDIA TESLA K20X
Windows 7 Professional 64-bit
Optional Installed Software
ANSYS 14.5 Release
Stats for CUBE HVPC Model Number : w16i-KGPU
Learn more about this and other CUBE HVPC systems here.
Concern #2: Using RAID 5 Array for Solving Disk Volume
The hard drives used for I/O during a solve, the solving volume, were configured in a RAID 5 array. Below is some sample data showing the minimum write speed of a similar RAID 5 array. These are speeds better suited to your long-term storage volume, not your solving/working directory.
HITACHI ULTRASTAR 15K600
Qty / Type / Size / RAID
Qty 8 x 3.5″ SAS2 15k 600GB RAID 5
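The reason a RAID 5 solving volume hurts is the small-write penalty: each random write triggers a read-modify-write cycle of roughly four disk operations, while RAID 0 needs only one. A back-of-envelope Python sketch; the per-drive IOPS value is an assumed 15k RPM ballpark, not measured data:

```python
def random_write_iops(n_drives, drive_iops, level):
    """Very rough random-write IOPS estimate; ignores controller cache."""
    if level == "raid0":
        return n_drives * drive_iops        # striping: one I/O per write
    if level == "raid5":
        return n_drives * drive_iops / 4.0  # read-modify-write: ~4 I/Os per write
    raise ValueError(f"unknown level: {level}")

# Assumed ~180 IOPS per 15k RPM SAS drive, 8 drives as in the array above.
print(random_write_iops(8, 180, "raid0"))  # 1440
print(random_write_iops(8, 180, "raid5"))  # 360.0
```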
Concern #3: Using RAID 1 for Operating System
The hard drive array for the OS was configured as RAID 1. For a number-crunching server, RAID 1 is not necessary; if you absolutely must have RAID 1, please spend the extra money and go to a RAID 10 configuration.
I really don’t want to get into the seemingly infinite details of hard drive speeds and latency, or begin to explain whether you should be using an onboard RAID controller, a dedicated RAID controller, or a software RAID configuration within the OS. There is so much information available on the web that a person gets overloaded. When it comes to Distributed ANSYS, think fast hard drives and fast RAID controllers. Start researching your hard drives and RAID controllers using the list provided below; again, only as a suggestion! I have listed the drives in order based on a very scientific and nerdy method: if I saw a pile of hard drives, which one would I reach for first?
I prefer SEAGATE SAVVIO or HITACHI enterprise-class drives: SAS2 (Serial Attached SCSI) 6 Gbit/s 3.5” 15,000 RPM spindle drives (best bang for your dollar of space, with more read and write heads than a 2.5” spindle drive).
I prefer Micron or INTEL enterprise-class SSDs: SATA III 6 Gbit/s solid state drives (SSD sizes have increased, but you will need several for an effective solving array and they still are not cheap).
I prefer the SEAGATE SAVVIO 2.5” enterprise-class spindle drives: SAS2 6 Gbit/s 2.5” 15,000 RPM (if you need a small form factor, speed, and additional storage; the 2.5” drives do not have as many read and write heads as a 3.5” drive, but when I need to slam 4 or 8 drives into a tight location, the SEAGATE SAVVIO 2.5” is the way to go). Here is a link to a data sheet. Another similar option is the HITACHI ULTRASTAR 15K600; its spec sheet is here.
SATA II 3 Gbit/s 3.5” 7,200 RPM spindle drives are also a good option. I prefer Western Digital RE4 1TB or 2TB drives. Their spec sheet is here.
LSI 2108 RAID Controller and Hard Drive data/details:
How would a CUBE HVPC system from PADT, Inc. balance out this configuration, and how much would it cost?
I quoted the items below, installed and out the door (including my travel expenses, etc.), at $30,601.
The company ended up going with their own preferred hardware vendor. Understandable; and one good thing is that we are now on their preferred purchasing supplier list. They were greatly appreciative of my consulting time and indicated that they will request a “must have” quote for a CUBE HVPC system at their next refresh, in a year. They want to go over 1,000 cores at that refresh.
I recommended that they install the following into the HPC cluster (note: they already had blazing fast hard drives):
As we put the finishing touches on the latest 512 core CUBE HVPC cluster, PADT is happy to report that there are now 2,042 cores worth of High Value Performance Computing (HVPC) power out there in the form of PADT’s CUBE computer systems. That is 2,042 Intel or AMD cores crunching away in workstations, compute servers, and mini-clusters chugging on CFD, Explicit Dynamics, and good old fashioned structural models – producing more accurate results in less time for less cost.
When PADT started selling CUBE HVPC systems it was for a very simple reason: our customers wanted to buy more compute horsepower but they could not afford it within their existing budgets. They saw the systems we were using and asked if we could build one for them. We did. And now we have put together enough systems to get to 2,042 cores and over 9.5TB of RAM.
Our Latest Cluster is Ready to Ship
We just finished testing ANSYS, FLUENT, and HFSS on our latest build, a 512 core AMD based cluster. It is a nice system:
512 2.5GHz AMD Opteron 6380 Processors: 16 cores per chip, 4 chips per node, 8 nodes
2,048 GB RAM, 256GB per node, 8 nodes
24 TB disk space – RAID0: 3TB per node, 8 nodes
16 Port 40Gbps Infiniband Switch (so they can connect to their older cluster as well)
All for well under $180,000.
It was so pretty that we took some time to take some nice images of it (click to see the full size):
And it sounded so awesome that we took this video so everyone can hear it spooling up on a FLUENT benchmark:
If that made you smile, you are a simulation geek!
Next we are building two 64 core compute servers, for another repeat customer, with an Infiniband switch to hook up to their two existing CUBE systems. This will get them to a 256 core cluster.
We will let you know when we get to 5000 cores out there!
Are you ready to step out of the box, and step into a CUBE? Contact us to get a quote for your next simulation workstation, compute server, or cluster.
There is a closet in the back of PADT’s product development lab. It does not store empty boxes, old files, or obsolete hardware. Within that closet is a monster. Not the sort of monster that scares little children at night. No, this is a monster that puts fear into the hearts of those who try to paint high performance computing as a difficult and expensive task only to be undertaken by those in the priesthood. It makes salespeople who earn fat commissions by selling consulting services and unnecessary add-ons quake in fear.
This closet holds PADT’s latest upgrade to our compute infrastructure: a 512 core CUBE HVPC Cluster. No data center, no special consultants, no expensive add-ons. Just 512 cores chugging away at solving FLUENT and CFX problems, and pumping a large amount of heat up into the ceiling.
Here are the specifics:
CUBE C512 Columbia Class Cluster
512 AMD 2.4GHz Cores (in 8 nodes, 4 sockets per node, 16 cores per socket)
2TB RAM (256 GB per node of DDR3 1600 ECC RAM)
Raid Controller Card (1 per node)
24TB Data Disk Space (3TB per node of SAS2 15k drives in RAID0)
Infiniband (8 Port switch, 40 Gbps)
52 Port GIGE switch connected to 2 GIGE ports per node
42 U Rack with thermal convection ducting (chimney)
Keyboard, monitor, mouse in drawer
CENTOS (switching to RedHat soon)
We built this system with CFD simulation in mind. The original goal was to provide a proof of concept to expand our CUBE HVPC offering, showing that you can create a cluster of this size, with very good speed, for a price that small and medium sized companies can afford. We also needed a way to run large problems for benchmarks in support of our ANSYS sales efforts and to provide faster technical support to our FLUENT and CFX customers. We already have a growing queue of benchmarks waiting to get into the machine.
The image above is the glamour shot. Here is what it looks like in the closet:
Keeping with our theme of High Value Performance Computing, we stuck it into this closet, which was built for telephone and networking equipment back at the turn of the century when Motorola had this suite. We were able to fit a modern rack in next to an old rack that was in there. We then used the included duct to push the air up into our ceiling space and rerouted the A/C ducting right into the front of the units. We did need to keep the flow going into the rack instead of into the area under the networking and telephone switches, so we used an old video game poster:
Anyone remember Ratchet and Clank? Best PS2 games ever.
It works well and adds a little color to the closet.
So far our testing has shown some great numbers. This is not the fastest cluster out there, but if you look at the cost, it offers incredible performance. You could add a drive array over Infiniband, faster chips, and some redundant power, and it would run faster and more reliably, but it would also cost much more. We are cheap, so we like this solution.
Oh yeah, with the parts from our old CFD cluster and some new bits, we will be building a smaller mini-cluster using INTEL chips, a GPU or two, and a ton of fast disk and RAM as our FEA cluster. Look for an update on that in a couple of months.
Interested in getting a cluster like this for your computing pleasure? A system configured like this one will run about $150,000 (video game poster is extra). Visit our CUBE page to learn more or just shoot an email to firstname.lastname@example.org. Don’t worry, we don’t sell these with sales people; someone from IT will get back with you.
We have a new rack installed in our compute server room (well closet really). I wonder what we can fill that with? Looks like it can handle a lot of heat, and a lot of units. We shall see what the week brings.
Picking a Server Rack Frame
Selecting a server rack frame could be the most important part of the designing phase. In order to assist with choosing a proper fit for your environment, here are 8 rack considerations to keep in mind:
What size Rack Cabinet Enclosure do I need?
Selecting the correct server cabinet size depends on two major factors: the type of equipment that needs rack mounting and the amount of equipment requiring server rack enclosure space. The key to a good server rack buying experience is planning. Ideally, users should tally the total number of rack units currently needed and also keep future expansion in mind, because rack units cannot be added once a server rack is fabricated. If additional rack mount accessories such as environmental monitoring, battery back-up, and/or remote power management are required, extra front and rear cabinet space might be needed in order to sufficiently mount rack accessories vertically and horizontally. At rackmountsales.com you can choose racks by size.
What is the significance of Internal Rack Cabinet Enclosure Dimensions? Internal Dimensions should be used as a guide to gauge the size and amount of equipment one can install in server rack enclosures. Internal vertical measurements from the tallest point of any side rail to the bottom chassis is regarded as total internal height. Internal depth is figured by measuring from the insides of both front and rear doors. Lastly, internal width measurements extend from one side panel to the other.
When assessing rack mount needs, internal dimension measurements should also take into consideration rack equipment and accessories that normally mount internally to the rear or side of cabinets. Additional space can be specified during rack manufacturing to allow for side, rear, and front mounted rack equipment. This auxiliary compartment space will also provide room for ventilation systems, bulky power cords, and cable management.
What is the significance of External Rack Cabinet Enclosure Dimensions? Determining server rack location within a data center or co-location facility is often overlooked until the rack enclosure arrives at the dock for delivery. It is very crucial for users to determine if the finalized exact external dimensions of the server rack will fit through doorways and other obstructions of the intended target location. Consider carefully environmental factors such as ceiling height and clearance regulations in your data center or server farm. Also, be sure to respect dimensions of stairways and freight elevators if server racks need to be transported through them for final placement.
Will my Rack Cabinet Enclosure fit in the room it’s intended for?
Considerations such as server rack weight and height are very important factors to take into account when moving server racks from place to place. Particular server racks can weigh over 300 lbs. and stand over 7 feet tall. Server racks are large items which require considerable effort when moving, rounding corners, lifting up stairs, and fitting into tight spaces. Please ensure that enough room has been made and accounted for before rack enclosure purchases are finalized.
Will the Rack Cabinet Enclosure fit through all doors on the way into the destination room?
All of our server rack enclosures ship fully assembled. Some components, such as doors and side panels, are removable, but removing them does not change the external dimensions of the rack frame, which cannot be taken apart. Please consider all product dimensions carefully to ensure the server rack meets all clearance requirements.
What is a Rack Unit? What does 40U mean? 44U? 48U? etc.
A “Rack Unit” or Rack “U” is an EIA standard unit for measuring rack mount equipment. One Rack Unit is equal to 1.75″ in height. To calculate the internal usable space of a rack enclosure, simply multiply the total number of Rack Units by 1.75″. For example, a 44U rack enclosure has 77″ of internal usable space (44 x 1.75).
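The Rack Unit arithmetic above can be sketched in a few lines of Python; the constant and function name here are illustrative, not from any library:

```python
# Minimal sketch of the EIA Rack Unit conversion: 1U = 1.75 inches.

RACK_UNIT_INCHES = 1.75  # EIA standard height of one Rack Unit

def usable_height_inches(rack_units):
    """Internal usable height, in inches, of an enclosure of the given size."""
    return rack_units * RACK_UNIT_INCHES

print(usable_height_inches(44))  # 77.0 inches for a 44U enclosure
```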
How do I calculate how many Rack Units I need?
Many data center managers calculate the rack enclosure height they need by determining optimal rack unit usage. For example, if future plans call for the addition of 20 servers at 2U each (40U), a 44U rack enclosure will provide enough internal height for those servers plus a 1U patch panel and a 2U UPS back-up battery (43U total). Rear- or side-mounted vertical power management devices will also have sufficient room to perform their functions.
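That capacity calculation is easy to automate: total the rack units each piece of equipment occupies, then pick the smallest enclosure that fits. This is a hypothetical sketch; the stock sizes and function name are assumptions for illustration:

```python
# Hypothetical capacity-planning helper: sum the U each item occupies and
# return the smallest stock enclosure size that holds the total.

def smallest_enclosure(equipment_units, stock_sizes=(40, 44, 48)):
    """Return the smallest stock enclosure size (in U) that fits everything."""
    total = sum(equipment_units)
    for size in sorted(stock_sizes):
        if size >= total:
            return size
    raise ValueError("no stock enclosure fits %dU of equipment" % total)

# 20 servers at 2U each, a 1U patch panel, and a 2U UPS: 43U in total
print(smallest_enclosure([2] * 20 + [1, 2]))  # 44
```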
What is the purpose of a 2-Post Relay Rack?
A Relay Rack is a 2-post aluminum or steel structure with either EIA standard (round) mounting holes or universal (square) mounting holes. Relay Racks are also known as 2-Post Racks or Open Bay Racks. The vertical hole spacing on Relay Racks is standardized for mounting telco or computer/network equipment. Relay Racks can also mount cantilever shelving for other non-rack-mountable equipment.
The Open Bay rack design also provides maximum air flow for the entire rack due to the open frame construction.
Universal Mounting Rails
EIA Standard 10/32 Tapped Mounting Rails
Rack Mount Rails: We can manufacture server rack enclosures with either Universal Mounting Rails (square holes fitted with cage nuts) or EIA Standard rails (10-32 tapped holes). All of our cabinet rails are heavy-gauge steel (1/8″ thick or more) with an electroplated finish for maximum protection.
Universal Mounting Rails: Universal rails support 19″ EIA-width rack mount and networking equipment and almost all server equipment. Cage nuts and screws are needed to mount equipment to universal mounting rails.
EIA Standard Mounting Rails: Standard mounting rails support 19″ EIA-width rack mount and networking equipment and some server manufacturers’ rack mounting equipment. Please be aware that not all rack-mountable equipment will match the EIA 10-32 hole pattern on standard rails. Standard mounting rails do not allow the use of cage nuts.
Which Mounting Rails do I need? It depends on the equipment you will be mounting in the rack enclosure. Most rack mount and networking equipment, such as hubs, routers, and patch panels, conforms to EIA Standard hole spacing. However, some server and rack accessory manufacturers supply rack mounting kits for attaching equipment to Universal Rails; in that case, the proper cage nuts and screws will most likely be needed to mount the equipment in one of our server cabinets.
There are currently 3 types of Mounting Hardware used with our server cabinet rails:
10-32 Tapped Cage Nuts and Screws – American Version – Commonly used in all rack mount applications including music, video, broadcast, data and more. The “10” refers to the screw gauge: the outside diameter of a 10-32 screw is 0.190″, smaller than a 12-24 screw. This screw type has 32 threads per inch.
12-24 Tapped Screws – American Version – The “12” refers to the screw gauge: the outside diameter of a 12-24 screw is 0.216″, larger than a 10-32 screw. This screw type has 24 threads per inch.
M6 Tapped Cage Nuts and Screws – Metric Version – Metric thread with a nominal diameter of 6 millimeters. The typical thread size for European rack applications; also used in Compaq racks and Euro racks sold in the US. Larger than both 10-32 and 12-24.
What mounting hardware do I need?
It depends on the mounting rails of the rack enclosure or relay rack you will be ordering. Most 4-post server racks, cabinets, and LAN enclosures use either cage nuts and screws for square-hole Universal Mounting Rails or 10-32 tapped screws for round-hole EIA Standard Mounting Rails. Please be aware that almost all 2-post open relay racks use 10-32 tapped screws (round-hole mounting rails).
It is a busy time in the world of CUBE computers. We are building our own new cluster, replacing a couple of older file servers we bought from “those other guys,” and building a 128-core mini-cluster for a new CUBE customer. We ran out of room in the IT cubicle, so we looked around and found that PADT’s clean room was not being used. A few tables and tools later, we had a mini-cluster assembly facility.
With the orders that customers have told us are on the way before the end of the year, this is going to be a busy area through December.