CRC Energy Efficiency Scheme

A look at how to meet your energy efficiency requirements with cutting-edge technology, by Michael Cook of USystems
Carbon Footprint - A cash windfall, not a burden
Financial directors can earn a cash windfall or, quite possibly, an income stream by utilising energy-saving datacentre design options – the energy saving can be turned into carbon credits and sold on the carbon trading system.
Datacentres are potentially in a lose-lose situation: ever more data is being stored as computer use grows around the world. In keeping up with this growth, and the consequent rise in energy consumption, their owners will also face a carbon tax as governments implement national green strategies.
But by measuring the environmental impact of your datacentre to establish its energy consumption and, from that, its carbon emissions, you can turn the situation into a win-win.
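As a rough sketch of how such a windfall might be sized, the arithmetic below (in Python) converts an energy saving into avoided CO2 and a notional credit value; the annual saving, the grid carbon factor and the allowance price are all illustrative assumptions, not figures from the CRC scheme rules or from any real site.

    # Illustrative only: every input below is an assumption chosen for the example.
    energy_saved_kwh = 300_000        # assumed annual energy saving from efficient cooling
    grid_factor_kg_per_kwh = 0.5      # assumed carbon intensity of grid electricity
    allowance_price_gbp = 12.0        # assumed price per tonne of CO2

    co2_saved_tonnes = energy_saved_kwh * grid_factor_kg_per_kwh / 1000
    credit_value_gbp = co2_saved_tonnes * allowance_price_gbp
    print(f"~{co2_saved_tonnes:.0f} tonnes of CO2 avoided, worth ~£{credit_value_gbp:,.0f} a year")
    # -> ~150 tonnes of CO2 avoided, worth ~£1,800 a year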
From PUE to CRC
PUE (power usage effectiveness) has been widely adopted to measure the ratio of the total power consumed by a datacentre to the specific power consumed by the computing equipment populating the facility.
This shows how efficiently energy is being used and how much of the power consumed goes to support functions, such as cooling and power distribution, rather than to the IT load itself.
The ideal PUE is 1.0, where 100 per cent of the power consumed by a datacentre goes to the IT equipment and none to lighting and cooling. Most datacentres run with a PUE of over 2.0, meaning that for every watt of IT power an additional watt is consumed by ancillary services, largely the cooling system.
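As a simple illustration (the load figures below are assumed purely for the example, not measured data), PUE is just total facility power divided by IT power:

    # PUE = total facility power / IT equipment power.
    def pue(total_facility_kw, it_load_kw):
        return total_facility_kw / it_load_kw

    it_load_kw = 100.0      # assumed power drawn by the IT equipment itself
    support_kw = 100.0      # assumed power for cooling, distribution and lighting
    print(pue(it_load_kw + support_kw, it_load_kw))   # prints 2.0: one extra watt per IT watt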
Such ancillary overhead is an obvious waste of energy and, therefore, an avoidable source of the carbon emissions at the heart of the Government's Carbon Reduction Commitment (CRC) Energy Efficiency Scheme.
The ColdLogik Connection
Whether the site is a new build or a legacy facility, one simple fact about datacentres holds: traditional CRAC cooling and containment cooling is expensive to run and outdated. Its place can be taken by state-of-the-art, UK-designed and manufactured ColdLogik technology, which delivers a PUE as close to 1.0 as is practical, with the twin benefits of far lower energy costs and CO2 emissions, plus other savings.
In addition, this cutting-edge technology addresses all the key drivers against which datacentres and communication rooms are measured: energy consumption, carbon footprint, modularity, packaging density, capital cost and redundancy; these factors apply to any cluster, whether low or high density.
Energy Consumption
Take a traditionally designed datacentre, perhaps deploying a conventional hot-aisle / cold-aisle configuration on a raised floor. It will typically use 40 kW of cooling power for every 100 kW of IT load; even aisle containment will only reduce this ratio by a small percentage.
In comparison, ColdLogik will use less than 4 kW of cooling power for the same 100 kW load, a better than 90 per cent year-on-year saving in cooling energy and, therefore, a corresponding reduction in carbon emissions.
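A back-of-the-envelope check on that claim, using the 40 kW and 4 kW cooling figures quoted above and assuming, for simplicity, continuous year-round operation at a constant load:

    # Cooling-energy comparison based on the figures quoted in the text;
    # round-the-clock operation at a constant load is an assumption.
    it_load_kw = 100.0
    cooling_traditional_kw = 40.0
    cooling_coldlogik_kw = 4.0
    hours_per_year = 24 * 365

    saving_pct = (cooling_traditional_kw - cooling_coldlogik_kw) / cooling_traditional_kw * 100
    saving_kwh = (cooling_traditional_kw - cooling_coldlogik_kw) * hours_per_year
    print(f"{saving_pct:.0f}% cooling saving, {saving_kwh:,.0f} kWh per year")
    print(f"implied PUE (cooling only): {(it_load_kw + cooling_traditional_kw) / it_load_kw:.2f} "
          f"vs {(it_load_kw + cooling_coldlogik_kw) / it_load_kw:.2f}")
    # -> 90% cooling saving, 315,360 kWh per year
    # -> implied PUE (cooling only): 1.40 vs 1.04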
Modularity
The ability to grow a datacentre or communications room in a logical and cost-effective manner makes obvious business sense: deploy only what is needed today while remaining prepared for rapid future expansion. ColdLogik is modular and allows you to upgrade with a high degree of flexibility and future-proofing.
Over time, as datacentres and comms rooms expand or sometimes contract, and where third-party racks might have to be accommodated, the cooling solution has to meet these ever-changing demands. Again, the ColdLogik system, which can be retrofitted to any populated OEM rack, comes into its own.
Packaging Density
Despite the efficiency improvements made by the majority of manufacturers, and the promise of more to come, the fact remains that electronic packaging densities will continue to rise. Hot spots in datacentres and comms rooms are commonplace, and given that 55 per cent of electronic failures are attributed to temperature, it is no surprise that a lot of effort has gone into resolving these issues.
By design, the ColdLogik solution eliminates individual rack and localised hot spots, maintaining a constant room-ambient temperature. In fact, the enhanced efficiency of the ColdLogik system allows an unrivalled 45 kW of heat to be removed from an industry-standard 600 mm wide rack, enabling electronic packaging densities to increase substantially.
Such an increase in rack density enables a reduction in the number of racks required within a single room, negating the need for additional expensive floor space.
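A short sketch of that effect on rack count; the 450 kW total load and the 5 kW per-rack figure for a conventional room are assumptions for illustration, while the 45 kW figure is the per-rack capacity quoted above.

    # Illustrative only: total load and conventional per-rack density are assumed.
    import math

    total_it_load_kw = 450.0
    conventional_rack_kw = 5.0     # assumed density of a conventionally cooled rack
    coldlogik_rack_kw = 45.0       # per-rack capacity quoted in the text

    print(math.ceil(total_it_load_kw / conventional_rack_kw))   # 90 racks at conventional density
    print(math.ceil(total_it_load_kw / coldlogik_rack_kw))      # 10 racks at 45 kW per rack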
Capital Cost
Despite all of the benefits that a ColdLogik system brings, it doesn't follow that this high-tech solution costs more than existing, outdated cooling designs.
Not having to deploy a raised floor or a ceiling plenum is a considerable saving in itself, and the combination of modular design and increased packaging density means that the ColdLogik system will actually cost less, not more, than the standard enclosed-aisle / in-row cooling design.
Redundancy
Given the critical nature of the services that datacentres and communication rooms provide, it is crucial that affordable redundancy is built in. Even in its basic form, the ColdLogik solution incorporates high levels of redundancy.
Summary
I have briefly, but specifically, covered the major benefits of the USystems ColdLogik system, which is potentially set to become the new standard by which state-of-the-art, efficient datacentre design is measured.
To many, the ColdLogik system may appear to be relatively new technology; it is, though, a tried and tested system, used not just in the UK but across Europe, the USA, India and Africa.
Want to find out more? A white paper recently written by a consultant about a ColdLogik project details the capital cost savings compared with CRAC cooling and in-row containment cooling; it also highlights the energy savings and the commensurate reduction in CO2 output from the local power station. It is available from http://www.usystems.co.uk/coldlogik/whitepaper