Knowledge and Design

With over 20 years of experience, the staff at Karis Technologies Inc. have been involved in dozens of build-outs, whether for a new greenfield site or the expansion of an existing facility. Whether your data centre is conventional or high density, we can provide comprehensive infrastructure deployments from start to finish for:

  • Enterprise data centres
  • Medium size data centres
  • IDF/MDF locations
  • Remote office locations
  • Deployment labs

We offer a single source to organize your technology environment. With our large network of equipment and service providers, we will find the right solution for your project. Our services include:

  • On-site Survey
  • Project Consultation
  • Budgetary Quotes
  • CAD/Design
  • Needs Analysis
  • Power Evaluations
  • Climate / Environmental Analysis
  • Project Management
  • Full Implementation
  • Delivery and Installation

Introduction

In 1965, Gordon Moore, co-founder of Intel, predicted that the number of transistors on a square inch of integrated circuit would double every eighteen months. This is often referred to as Moore's Law, and the prediction has held reasonably true to date. The exponential increase in processor speed associated with the increase in transistors has also resulted in higher processor power consumption, and that consumed power is rejected as heat. At the same time, computer equipment manufacturers have installed more processors in smaller and smaller physical spaces, compounding the amount of rejected heat in a given area.

To keep the components within data processing equipment from overheating, manufacturers use external ambient air as the heat rejection medium. In most applications this cooling airflow is generated by internal fans forcing air from front to back, side to top, or side to back of the equipment case. The lower the cooling air temperature entering the equipment case, the greater the heat transfer from the components to the exiting air.
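
To make that relationship concrete: the sensible heat carried away by an air stream is commonly estimated as Q = 1.08 × CFM × dT, with Q in BTU/hr, airflow in CFM, and the air temperature rise dT in degrees F. The short Python sketch below is illustrative only; the 1.08 factor is the standard constant for air near sea-level density, and the 500 CFM and 100 degree F figures are assumed example values, not Karis data.

    # Sensible heat carried away by a cooling air stream (illustrative only).
    # Q = 1.08 * cfm * delta_t, with Q in BTU/hr and delta_t in deg F;
    # the 1.08 factor assumes air near sea-level density.
    def heat_removed_btu_hr(airflow_cfm, inlet_f, outlet_f):
        return 1.08 * airflow_cfm * (outlet_f - inlet_f)

    # With fan airflow fixed at 500 CFM and component limits capping the
    # exhaust at 100 F, a cooler inlet carries away more heat:
    for inlet_f in (80, 70, 60):
        print(f"{inlet_f} F inlet: {heat_removed_btu_hr(500, inlet_f, 100):,.0f} BTU/hr")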

Traditional racks or cabinets housing this data processing equipment, cooled by traditional data room methods, provided adequate cooling until recently. However, as the heat rejection from a standard rack increased to 2 or 3 kW, cooling problems began to develop. The problem is particularly noticeable (and measurable) near the top of a standard rack when ambient room cooling is provided from a raised floor air plenum.

The Challenge

Late in the 1990s, cutting-edge, technology-intensive users began to express concerns about traditional cooling methods for rack-mounted data processing equipment. The problems were at first anecdotal: failure rates for servers located near the top of the rack seemed greater. While once limited to relatively high-end users, the problems with high density data processing equipment heat loads are now well documented. There has also been growing recognition that the computer cabinet can be an integral part of the solution to cooling the equipment.

As legacy equipment is replaced by the latest generation of technology, with its higher watt densities, more and more users are experiencing unacceptable temperature gradients within cabinets. In many cases the existing computer room air conditioning system's capacity is sufficient to cool the high density equipment, but airflow issues prevent consistent cooling, particularly higher in the rack. A method of providing adequate cooling to high density equipment is needed, and the solution requires all of the following attributes.

1 – Effectively Provide Constant Temperature Air Throughout the Rack, Bottom to Top
The air needs to be delivered to the air inlets of the equipment, where it can be used to maximize heat transfer from the internal components; any cooling air not drawn internally through the equipment will not cool effectively. The air temperature is ideally a few degrees above the room dew point, providing the lowest processor operating temperature without condensation. Cooling needs to be controlled and directed: where there are blank spaces in racks or non-heat-generating equipment (e.g. patch panels), there is no need for cooling air. Further, some equipment has its air inlets at the front while other equipment has side air intakes.
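
As a rough way to check the "few degrees above the dew point" guideline, the dew point can be approximated from room temperature and relative humidity with the Magnus formula. The sketch below is a generic approximation using one commonly published coefficient set; it is not a method taken from Karis, and the 72 degree F / 45% figures are assumed example conditions.

    import math

    # Approximate dew point via the Magnus formula (illustrative only).
    # b and c are one commonly published coefficient set for water vapor.
    def dew_point_f(temp_f, rel_humidity_pct):
        t_c = (temp_f - 32.0) * 5.0 / 9.0
        b, c = 17.62, 243.12
        gamma = math.log(rel_humidity_pct / 100.0) + (b * t_c) / (c + t_c)
        return (c * gamma) / (b - gamma) * 9.0 / 5.0 + 32.0

    # Example: a 72 F room at 45% relative humidity has a dew point near
    # 50 F, so 70 F supply air stays safely above it.
    print(round(dew_point_f(72, 45), 1))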

2 – Work Within Existing Data Center Constraints So As Not To Cause Disruption To Existing IT Equipment
Manufacturers introduce new generations of technology equipment at rapid rates, with product cycles of 18 months to 2 years. Users, however, deploy this new technology at a slower rate: constrained by budget limitations, they use it to replace obsolete systems gradually, with a first-in, first-out approach. Business use of existing equipment cannot be interrupted while new infrastructure is installed. Infrastructure improvements can be expensive and have long lead times; often, by the time they can be funded, designed, and constructed, the data processing equipment they were intended to support has become obsolete. The risk to ongoing operations when installing new systems, equipment, and interconnections is significant, often with high financial impact when business functions are interrupted.

3 – Maximize Use Of Existing Air Conditioning Infrastructure
Data centers often develop unacceptable temperatures in high density areas even though the air conditioning systems are not at capacity. Several factors can contribute to this; in particular, existing raised floor plenum air distribution systems can suffer from bypass airflow, obstructed airflow, low static pressure, and mixing of computer exhaust with inlet air. Computer Room Air Conditioner (CRAC) units operate most efficiently with higher return air temperatures. Air bypassing the equipment inlets returns to the CRAC units at cooler temperatures, effectively lowering the design capacity of those units. When cooling air is provided directly to the inlets of the data processing equipment, maximum heat transfer occurs and the CRAC units operate at maximum efficiency, up to their full rated capacity.
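
A simple mixing calculation shows why bypass air erodes CRAC capacity: air that skips the equipment dilutes the hot exhaust, and sensible capacity at a fixed airflow scales with the return-to-supply temperature difference. The Python sketch below uses made-up example temperatures, not measured data.

    # Illustrative mixing model: a fraction of the supply air bypasses the
    # IT equipment and returns to the CRAC still at supply temperature.
    def return_air_f(supply_f, exhaust_f, bypass_fraction):
        return bypass_fraction * supply_f + (1.0 - bypass_fraction) * exhaust_f

    supply_f, exhaust_f = 70.0, 100.0  # assumed example temperatures, deg F
    for bypass in (0.0, 0.25, 0.5):
        ret = return_air_f(supply_f, exhaust_f, bypass)
        # Sensible capacity at fixed airflow scales with (return - supply).
        pct = 100.0 * (ret - supply_f) / (exhaust_f - supply_f)
        print(f"bypass {bypass:.0%}: return air {ret:.1f} F, "
              f"effective capacity {pct:.0f}% of design")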

4 – Eliminate Air Flow Restrictions Within the Cabinet
Unobstructed airflow to the inlets, through the equipment, and back to the CRAC units is essential. Air brought into cabinets through cut-outs in the floor is blocked by the lowest server in the cabinet; it not only fails to provide effective cooling but also reduces static pressure under the floor, further reducing airflow through perforated tiles and increasing temperature gradients. Cabling systems must also be managed to allow exhausted hot air to flow freely away from the equipment.

5 – Be of High Quality but Affordable and Simple To Implement For All Levels of Users of IT Equipment
Cabinets need to be flexible enough to be used with a wide range of equipment from different manufacturers. A new cabinet for high density equipment needs to be able to be rolled into an existing facility, populated with equipment, and perform its intended business function without expensive infrastructure improvements and the associated delays.

6 – Cool Up to 10 kW Per Cabinet with 70 Degree F Inlet Air and a 30 Degree F Temperature Rise Through the Equipment
There is an upper limit at which infrastructure needs restrict the density that can actually be deployed across a data center as a whole. This does not, however, limit the power consumed (and therefore the heat dissipated) by individual cabinets. A cabinet with up to 10 kW of load is high but not unseen in the industry. Properly utilized air conditioning systems can provide 70 degree air without risk of condensation in most applications, and 70 degree air directed to computer equipment inlets will transfer enough heat to cause a 30 degree rise in the discharge air.
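
As a sanity check on these figures, converting 10 kW to BTU/hr and applying the same sensible-heat relation used earlier gives the airflow a fully loaded cabinet must deliver. This is an estimate from the textbook formula, not a Karis specification.

    # Airflow needed to remove a given load with a given air temperature
    # rise, from Q(BTU/hr) = 1.08 * cfm * delta_t (illustrative estimate).
    BTU_PER_HR_PER_KW = 3412.14

    def required_cfm(load_kw, delta_t_f):
        return load_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

    # A 10 kW cabinet with 70 F inlet air and a 30 F rise (100 F exhaust)
    # needs roughly 1,050 CFM delivered to the equipment intakes.
    print(f"{required_cfm(10, 30):,.0f} CFM")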

The Solution

The Karis Technologies Thermal Management Enclosure is the only product available that meets all of the challenges of cooling high density IT equipment presented above.

1 – Effectively Provide Constant Temperature Air Throughout the Rack, Bottom to Top
To solve the problem, one must understand that the computer supply fans are not strong enough to pull air from the cold aisle through mesh doors to cool higher density enclosures. The cabinet must create a Usable Air Environment: the right temperature, at the right volume, at the right velocity, at the right pressure, delivered at the air intakes of the servers. The patented air delivery system delivers air through a pressurized plenum directly to the air intakes of the servers. Utilizing a pressurized plenum allows the large quantity of air being delivered to be controlled; without that control, the lower servers in the rack would be starved of air. Further, the enclosure's air intake location assures that the air delivered to the servers comes from the coolest air in the room, near the floor. The air delivered is a constant temperature, since there is no additional mixing with ambient room air; the solid acrylic cabinet door panel prevents this (and provides additional security over open mesh front door designs). Tests confirm that the top server in the rack is cooled with air of the same temperature as the lowest server. Because the delivered air is taken from above the floor, the risk of condensation from the approximately 55 degree F air below the floor is reduced.

2 – Work Within Existing Data Center Constraints So As Not To Cause Disruption To Existing IT Equipment
The Thermal Management Enclosure requires no infrastructure changes to implement. There is no additional piping, duct work, or floor modification, as some high density cabinet systems require, and the static pressure of under-floor air systems is not reduced by new openings in the floor. No additional water or refrigerants are introduced to the data processing environment, reducing the risk of leaks from such systems. The enclosure can be placed on an existing data center floor, configured with servers, and be operational immediately.

3 – Maximize Use Of Existing Air Conditioning Infrastructure
Cooling air directed to the equipment intakes via the pressurized plenum prevents bypass airflow, maximizes heat transfer, and returns the highest temperature air to the CRAC units, allowing them to operate at design capacity. The Karis Thermal Management Enclosure does not add air conditioning capacity, with its associated high capital and operating costs, but instead makes efficient use of existing A/C unit capacity, whether in an existing or new facility.

4 – Eliminate Air Flow Restrictions Within the Cabinet
Air is delivered where it is required, as the IT equipment manufacturers intended. There is no blockage of air associated with under-cabinet cut-outs: air is taken from the cold aisle and delivered directly to the equipment air intakes of individual racked components. The Thermal Management Enclosure also allows users to manage cabling effectively. Standard features include easily accessible vertical wiring troughs for in-cabinet connections, plus side panel knockouts with brush-style air restrictors for cabinet-to-cabinet connections, further reducing leakage of cooling air to places where it is not needed and would be wasted. With no cabling blocking airflow from the equipment exhaust, the equipment operates cooler and failure rates are reduced.

5 – Be of High Quality but Affordable and Simple To Implement For All Levels of Users of IT Equipment
Our latest innovation, the Thermal Management Enclosure, adds to our long line of computer enclosure advancements: cable management, numbered rails for "U" identification, integral ground lugs, and a welded 1 gauge steel enclosure with the ability to support 2,000 lbs of rack equipment. The Thermal Management Enclosure uses high quality, long life fans, and airflow adjustment does not depend on complex and expensive variable frequency drives. It is simply the most effective and best value solution for implementing high density computer equipment loads in existing or new data center applications, and it requires no infrastructure additions or modifications. The Thermal Management Enclosure can reduce operating costs through fewer equipment failures and improved performance of data room air conditioning systems. This built-in simplicity allows Karis Technologies to compete very favorably with other high density solutions on cabinet cost alone, even before counting the infrastructure modifications required by other manufacturers' enclosures and systems.

Conclusion

There is no need to convince IT professionals, specifying engineers, critical system managers, and data center managers that their environments are changing and have been changing for several years. The need to address the higher density data center is real. Data centers that were designed at 35 watts per square foot are obsolete and can no longer effectively cool the new generation of servers. Data centers designed and built at 65 watts per square foot are the predominant facilities in the country, and strategies must be employed to enhance their performance. In doing so, owners will avoid the substantial capital cost of designing and building new facilities.
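
The shortfall is easy to quantify. Assuming a typical allocation of roughly 25 square feet of floor area per cabinet (footprint plus aisle share; an assumed figure, not one from this paper), a design density translates into per-cabinet capacity as follows:

    # Rough density arithmetic (illustrative). The 25 sq ft per cabinet
    # allocation, covering footprint plus aisle share, is an assumption.
    SQ_FT_PER_CABINET = 25.0

    def supported_kw_per_cabinet(watts_per_sq_ft):
        return watts_per_sq_ft * SQ_FT_PER_CABINET / 1000.0

    for design in (35, 65):
        print(f"{design} W/sq ft design supports about "
              f"{supported_kw_per_cabinet(design):.1f} kW per cabinet")
    # About 0.9 and 1.6 kW respectively, versus cabinets approaching 10 kW.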

With so many companies offering products to improve the performance of existing data center cooling systems, the end user has several options. These options vary in performance and cost, and they come with different levels of risk to the production environment. As informed decisions are made, the astute professional will weigh the following:

1 – Performance
  • The capacity of the existing cooling system.
  • The density of new and existing cabinets.
  • The environmental requirements of existing gear, and what the cooling requirements will be over the next 3 years.
  • The temperature of the cooling air: will the air be too cold (for example, taken directly from under a raised floor) and cause condensation inside servers?
  • The life expectancy of the new equipment.
2 – Risk
  • The risk of jeopardizing business continuity during the installation of new equipment.
  • Whether new system enhancements will increase the potential for downtime, such as a water leak, a refrigerant leak, or a lack of redundancy.
  • Environmental modifications such as cutting openings in raised floors, which increase the risk of equipment tipping over, floor tiles being cut incorrectly, and reduced floor capacity.
3 – Maintenance
  • Whether in-house personnel can maintain the new cabinets or equipment.
  • Whether experienced technicians will be required to maintain refrigeration or chilled water equipment.
  • How much maintenance will be required, and whether it will force annual shutdowns.
  • Service agreements.
  • Additional service technicians working in the data center.
  • The response time your company will get from third party service providers in the event of an emergency.
4 – Cost
  • Upfront investment.
  • Installation costs.
  • Service agreements.
  • Routine maintenance.
  • The financial impact of a failed piece of cooling equipment or a water leak.

The products available today vary drastically across the above categories. Companies have developed fan systems that mount to the doors of computer cabinets, attempting to draw cool air through the servers, only to find that such systems do not eliminate the temperature gradients at the front of the servers and actually draw warm, mixed air through the servers instead.

Modular A/C units have been designed to sit on top of computer cabinets, delivering 55 degree air into the cabinet. While the cooling intent is realized, there is substantial risk and cost involved, not to mention performance concerns. Putting refrigeration equipment on top of production equipment jeopardizes uptime in many ways:

  • compressor failure,
  • wiring or relay problems,
  • refrigerant charges,
  • technicians climbing on top of gear,
  • lack of redundancy.

The cost is substantial, and lead times for parts and qualified service technicians become paramount. In addition, 55 degree air delivered to the new generation of servers has prompted complaints of condensation on processor boards.

Chilled water loops are a newer approach to enhancing system performance. While cabinet-mounted A/C units raise a broad range of concerns, putting chilled water lines into a mission critical computer cabinet adds further risk. While 55 degree air is delivered, the potential for water leaks is very real: flex lines, hose fittings, and valves all have the potential to leak. Then consider the water treatment program needed to prevent pitting and scale buildup, as well as the maintenance, the service agreement costs, and the downtime required to retrofit your system.

Fans mounted on the tops, bottoms, and backs of cabinets, taking cold air from under a raised floor, are another attempt to cool gear within computer cabinets. The airflow patterns are confusing and questionable, and temperature gradients are not eliminated. The goal is to deliver the same volume and temperature of cool air to the face of every server housed in a cabinet. This approach also requires that the raised floor be modified and cut open, with a cabinet rolled over the opening. Air hits the bottom server and is blown and/or drawn to the top and back of the cabinet. The air is typically 55 degrees and can cause condensation within processing equipment. The cost is reasonable, but the performance is questionable.

All of the aforementioned approaches have benefits, but they also have downsides. There does exist an approach that delivers the coolest air in the room to the face of every server housed within a computer cabinet, requires minimal or no maintenance, does not require skilled third party service technicians, does not require raised floors to be modified, eliminates temperature gradients, poses no risk to production, ensures redundancy, eliminates air mixing, and is cost effective.

Karis Technologies has this solution with the ability to meet the data center needs of tomorrow.

Clients that use Microsoft Visio for equipment layout can contact Karis sales to receive shape files for Karis products.