-
I have them stacked in the one rack I have. Never had any problems with it, so I never had any reason to space them out. I would imagine the biggest reason people would space them out is heat.
-
I have never skipped rack units between rackmount devices in a cabinet. If a manufacturer instructed me to skip U's between devices I would, but I've never seen such a recommendation.
I would expect that any device designed for rack mounting would exhaust its heat through either the front or rear panels. Some heat is going to be conducted through the rails and top and bottom of the chassis, but I would expect that to be very small in comparison to the radiation from the front and rear.
-
I usually leave a blank RU after around 5RU of servers (i.e. 5x1RU, or 1x2RU + 1x3RU), but it depends on the cooling setup in the data centre you're in. If cooling is delivered in front of the rack (i.e. a grate in the floor in front of the rack), the idea is that cool air is pushed up from the floor and your servers pull it through themselves; in that case you typically get better cooling by not leaving blank slots (i.e. use blank RU covers). But if cooling is delivered through a floor panel inside your rack, then in my experience you get more efficient cooling by breaking the servers up rather than piling them on top of each other for the entire rack.
-
Every third, but that's due to management arms and the need to work around them rather than heat. The fact that those servers each have 6 Cat5 cables going to them doesn't help. We do make heavy use of blanking panels, and air-dams on top of the racks to prevent recirculation from the hot-aisle.
Also, one thing we have no lack of in our data-center is space. It was designed for expansion back when 7-10U servers were standard. Now that we've gone with rack-dense ESX clusters it is a ghost town in there.
-
If your servers use front to back flow-through cooling, as most rack mounted servers do, leaving gaps can actually hurt cooling. You don't want the cold air to have any way to get to the hot aisle except through the server itself. If you need to leave gaps (for power concerns, floor weight issues, etc) you should use blanking panels so air can't pass between the servers.
-
I don't skip Us. We rent and Us cost money.
No reason to for heat these days. All the cool air comes in the front and out the back; there are no vent holes in the tops any more.
-
We have 3 1/2 racks' worth of cluster nodes and their storage in a colocation facility. The only places we've skipped U's are where we need to route network cabling to the central rack where the core cluster switch is located. We can afford to do so space-wise since the racks are already maxed out in terms of power, so it wouldn't be possible to cram more nodes into them :)
These machines run 24/7 at 100% CPU, and some of them have up to 16 cores in a 1U box (4x quad core Xeons) and I've yet to see any negative effects of not leaving spaces between most of them.
So long as your equipment has a well designed air path I don't see why it would matter.
-
Don't leave space if you have cool air coming from the floor, and use blanks in unused U space as well. If you just have a low-tech cooling system using a standard A/C unit, it is best to leave gaps to minimize hot spots when you have hot servers clumped together.
-
In our data center we do not leave gaps. We have cool air coming up from the floor and gaps cause airflow problems. If we do have a gap for some reason we cover it with a blank plate. Adding blank plates immediately made the tops of our cold aisles colder and our hot aisles hotter.
I don't think I have the data or graphs anymore but the difference was very clear as soon as we started making changes. Servers at the tops of the racks stopped overheating. We stopped cooking power supplies (which we were doing at a rate of about 1/week). I know the changes were started after our data center manager came back from a Sun green data center expo, where he sat in some seminars about cooling and the like. Prior to this we had been using gaps and partially filled racks and perforated tiles in the floor in front and behind the racks.
Even with the management arms in place eliminating gaps has worked out better. All our server internal temperatures everywhere in the room are now well within spec. This was not the case before we standardized our cable management and eliminated the gaps, and corrected our floor tile placement. We'd like to do more to direct the hot air back to the CRAC units, but we can't get funding yet.
-
No gaps, except where we've taken a server or something else out and not bothered to re-arrange. I think we're a bit smaller than many people here, with 2 racks that only have about 15 servers plus a few tape drives and switches and UPSes.
-
No gaps, other than when planning for expanding SAN systems or things like that. We prefer to put new cabinets close to the actual controllers.
If you have proper cooling, leaving gaps will not be beneficial unless the server is poorly constructed.
-
Leaving gaps between servers can affect cooling. Many data centres operate suites on a 'hot aisle' 'cold aisle' basis.
If you leave gaps between servers then you can affect efficient airflow and cooling.
This article may be of interest:
Alternating Cold and Hot Aisles Provides More Reliable Cooling for Server Farms
-
I get the impression (perhaps wrongly) that it is a more popular practice in some telecoms environments where hot/cold aisles aren't so widely used.
It's not suited to a high density and well run datacentre though.
-
I have large gaps above my UPS (for installing a second battery in the future) and above my tape library (in case I need another one). Other than that I don't have gaps, and I use panels to fill up empty spaces to preserve airflow.
-
Google doesn't leave Us between servers, and I'd guess they are concerned about heat management. It's always interesting to watch how the big players do the job.
Here is a video of one of their datacenters:
http://www.youtube.com/watch?v=zRwPSFpLX8I&feature=player_embedded
Go directly to 4:21 to see their servers.
-
In a situation where we had our own datacenter and space wasn't a problem, I used to skip a U (with a spacer to block airflow) between logical areas: web servers had one section; database, domain controller, e-mail, and file servers had another; and firewalls and routers had another. Switches and patch panels for outlying desktops were in their own rack.
I can remember exactly one occasion where I skipped a U for cooling reasons. This was an A/V cable TV looping solution in a high school, where there were three units that were each responsible for serving the cable TV system to one section of the building. After the top unit had to be replaced for the third time in two years due to overheating, I performed some "surgery" on the rack to make mounting holes so I could leave 1/2U of space between each of the three units (for a total of 1U of space).
This did solve the problem. Needless to say this was thoroughly documented, and for extra good measure I taped a sheet to the top of one of them explaining why things were the way they were.
There are two lessons here:
- Leaving a gap for cooling is only done in exceptional circumstances.
- Use a reputable case or server vendor. Be careful of buying equipment that tries to pack 2U worth of heat into 1U worth of space. This will be tempting, because the 1U system may appear to be much cheaper. And be careful of buying an off-brand case that hasn't adequately accounted for air-flow.
-
I wouldn't leave gaps between servers, but I will for things like LAN switches - this allows me to put some 1U cable management bars above and below... but it's definitely not done for cooling.