  1.  
    I'm involved in getting a small data centre (currently ~50 kW peak, with a requirement to be able to expand to around 100 kW) refurbished where I work, and it's starting to look like it might be a complete strip-out and restart.

    As a result, I'm interested in seeing if we can come up with a solution that serves the needs of the room and its users but minimises the install and ongoing running costs. The company is a large multi-national with a reputation for being more environmentally conscious than our peers, so I think there would be some buy-in from the people who will have to sign it off.

    Can anyone point me towards some useful papers/websites that will help me understand the possible approaches?
    •  DamonHD, Oct 8th 2010
    There's been a largish group of manufacturers in the US trying to make computing and data centres greener; it would probably also be worth looking at Yahoo!'s recent Buffalo, NY plans and Google's too.

    Eg here: http://www.theregister.co.uk/2010/09/23/yahoo_compute_coop/

    Rgds

    Damon
    •  SteamyTea, Oct 9th 2010 (edited)
  2.  
    In my last 'proper job' I worked for a large blue-coloured IT company, where I was responsible for data centre installations and machine-room balance from the computing side. The data centre covered several thousand m² of raised floor space. Back then we struggled to keep within 50 W/m², the design limit of the buildings; now DCs can be significantly above this. Having glanced through the links above, it seems not much has changed. Little can be done to mitigate the power used by the equipment itself; that is in the hands of the manufacturers' research labs. How the equipment is used is a business decision for the firm running the DC: whether it runs 24/7 or is closed (and powered off) overnight or at weekends is usually out of the hands of those designing and running it. That leaves the support services (HVAC etc.) as the only area available for savings.
    The majority of the savings shown in the links above seem to come from dumping the heat generated by the DC to the outside without using almost as much power as the IT equipment itself to do so. The Yahoo site appears to use assisted natural convection for this. The ability to do this depends on the construction of the building and is not easy to retrofit. (Note that Yahoo produced a purpose-designed building for it, even if the chimney-effect concept was borrowed from agriculture.)
    The problem I faced in my previous life, where we were just beginning to look at green issues (albeit driven by cost reduction), was finding a use for the low-grade heat produced by the IT kit and the HVAC. It seems Yahoo have not cracked this one either, as they are dumping it to air: a waste, but they have no use for it. We went through several loops trying to find a use for this heat, including piping it to an adjacent hotel to heat their swimming pool (the pipe runs were too long to be viable). In the end we found no use for it and it remained a loss.
    Today life has moved on and heat pumps are much improved, so depending on the location and the surrounding demand it may be viable to upgrade the low-grade heat from the DC to a level where it can be used to heat the DC's support offices (if it isn't already) or exported at a price (i.e. sold) to adjacent organisations.
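    A rough Python sketch of the heat-pump arithmetic above (the 50 kW IT load is taken from the opening post; the COP is an assumption for illustration only):

        # Upgrading low-grade data-centre heat with a heat pump so it can be
        # exported to support offices or a neighbour. Figures are illustrative.
        it_load_kw = 50.0      # IT electrical load (opening post); nearly all of it ends up as heat
        assumed_cop = 3.5      # assumed heat-pump COP = heat delivered / electricity in

        low_grade_heat_kw = it_load_kw                              # heat rejected by the IT kit
        hp_electricity_kw = low_grade_heat_kw / (assumed_cop - 1)   # drive power needed to lift it
        useful_heat_kw = low_grade_heat_kw + hp_electricity_kw      # heat available at the higher temperature

        print(f"Low-grade heat available: {low_grade_heat_kw:.0f} kW")
        print(f"Heat-pump electricity   : {hp_electricity_kw:.0f} kW")
        print(f"Useful heat exported    : {useful_heat_kw:.0f} kW")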
    It is very difficult to future-proof DCs, as business demands and IT advances produce too many variables; all you can do is project the trends.
    For ongoing maintenance, the power supplies, distribution boards and connection boxes were regularly photographed with thermal imaging, which readily showed up hot spots that were then scheduled for maintenance. This process saved considerable downtime and maintenance cost compared with the previous physical inspections, which did not always pick up problems as reliably.
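    The hot-spot triage described above is essentially a threshold test over the survey readings; a minimal sketch (the threshold and the readings are made up for illustration):

        # Flag boards whose thermal-imaging reading is well above ambient.
        ambient_c = 22.0
        flag_margin_c = 15.0   # assumed: flag anything more than 15 degC over ambient

        survey = {"PDU-A1": 29.5, "PDU-A2": 41.0, "DB-3": 58.2, "UPS-1": 33.0}

        for board, temp_c in sorted(survey.items()):
            if temp_c - ambient_c > flag_margin_c:
                print(f"{board}: {temp_c:.1f} degC -> schedule for maintenance")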
    Peter
    •  DamonHD, Oct 9th 2010 (edited)
    At my previous main client, a large UK bank, I had a meeting/chat with a couple of the people on that side.

    They have been virtualising servers to get utilisation up and thus average useful cycles per Watt and per m^2.

    The building I was in has no heating, only cooling, using its machine-room heat for the rest of the building when required, I think.

    And I've demonstrated to myself that if energy efficiency is a sufficiently high priority then reducing W per unit of useful work done by an order of magnitude is by no means impossible. (My server suite at home, which runs our main Web-facing services such as Web sites, mail, etc., is down from ~600W to ~4W excluding comms, with comms down from ~50W to ~8W. That reduction also meant we could get rid of the summer air-con, which probably added another 30% to annual consumption. It has also gone from taking several m^2 of racking to sitting on the corner of my desk, and has come off-grid entirely!)

    Rgds

    Damon
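    Damon's figures above make for a quick savings estimate; a sketch in Python (the electricity price is an assumption, everything else is from his post):

        # Annual saving from ~600 W servers + ~50 W comms (plus ~30% air-con overhead)
        # down to ~4 W + ~8 W with no air-con. Tariff assumed for illustration only.
        hours_per_year = 8760
        assumed_price_per_kwh = 0.12   # assumed GBP/kWh

        before_w = (600 + 50) * 1.30   # old kit plus the ~30% air-con overhead
        after_w = 4 + 8                # new kit, air-con no longer needed

        saved_kwh = (before_w - after_w) * hours_per_year / 1000
        print(f"~{saved_kwh:,.0f} kWh/year saved, roughly GBP {saved_kwh * assumed_price_per_kwh:,.0f}/year")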
  3.  
    * DamonHD wrote:
    The building I was in has no heating, only cooling, using its machine-room heat for the rest of the building when required, I think.

    That is usual in my experience. Data centres usually produce more heat than can be used in the support offices that accompany them. The efficiency challenge is to find something useful to do with the surplus instead of dumping it outside. Perhaps when technology reduces its power demands sufficiently, many of these buildings with no heating other than their data centre will find themselves with a problem in winter.

    For your own situation, that sounds like a generational change in the hardware to me. If that is the case then yes, a new generation of kit will allow such changes, but the accountants will rarely allow such expense on energy consumption alone; usually energy usage is one line item on a justification document. Well done for getting such a reduction and coming off grid. For the next jump you will probably be waiting for the research labs to put the whole lot on one IC, pluggable into the processor, that may be no bigger than the kit sitting in the corner of your desk today.
    Peter
    •  DamonHD, Oct 9th 2010
    In my case there have been two generations of replacement kit, but each paid for itself in less than one year's savings on the original equipment's electricity bill, so an ROI that even a bean-counter could love.

    Rgds

    Damon

    PS. It is already fully solid-state with very few chips: http://www.earth.org.uk/note-on-SheevaPlug-setup.html
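    The payback arithmetic behind that is trivial to reproduce; a sketch with made-up figures (the thread only says each generation paid for itself in under a year):

        # Simple payback: capital cost recovered from the electricity saving.
        assumed_capital_gbp = 300.0      # assumed cost of the replacement kit
        assumed_saving_kwh = 5000.0      # assumed annual electricity saving
        assumed_price_per_kwh = 0.12     # assumed GBP/kWh

        payback_years = assumed_capital_gbp / (assumed_saving_kwh * assumed_price_per_kwh)
        print(f"Payback: {payback_years:.1f} years")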
    •  SteamyTea, Oct 10th 2010
    If it is a new DC (bloody IT people reducing everything to initials), locate it next to a 'customer for the heat', or at least a large heat sink like a lake. That's a general comment.

    As this is a refurbishment, and probably next to, or in, the main building, can the heat be used anywhere? Otherwise it is the Damon Route (that will be DR from now on) of installing new IT equipment and reducing the load by a factor of 5. 10 kW instead of 50 kW is a lot more manageable.
    •  DamonHD, Oct 12th 2010
    •  SimonH, Oct 13th 2010
    One of the big IT companies I've done work for in the past mentioned their involvement in this on their intranet - http://www.thegreengrid.org/

    Seems like quite a useful resource with lots of reference material - if you've paid to be a member :-(
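    The Green Grid's best-known metric is PUE (Power Usage Effectiveness): total facility power divided by IT power. A quick sketch of what different PUE values would mean for a load like the one in the opening post (the PUE values themselves are assumptions):

        # Overhead electricity implied by various PUE values for a ~50 kW IT load.
        it_load_kw = 50.0
        for pue in (2.0, 1.5, 1.2):
            overhead_kw = it_load_kw * (pue - 1)          # cooling, UPS and distribution losses
            print(f"PUE {pue}: {overhead_kw:.0f} kW overhead, "
                  f"{overhead_kw * 8760:,.0f} kWh/year")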
    •  DamonHD, Oct 18th 2010
    •  SteamyTea, Oct 18th 2010
    I counted 14 acronyms in that 497-word article; I think I'd better go out!
  4.  
    14 in 497 -
    this is a lot less than you get in some of the posts on this forum!!!

    Peter
    •  djh, Apr 8th 2011
    •  wookey, Apr 8th 2011
    There is enormous potential to reduce data centre power consumption, essentially by moving off x86 and onto ARM, which is exactly what Damon and I have done at home. Data centres are just starting down this road.

    Calxeda have some interesting kit (ARM-based servers) which will do things like reduce power consumption per compute node by a factor of 10 and space consumption per compute node by a factor of 100. Now actually, that will put your power consumption per square metre _up_ if you exploit the opportunity fully, but it clearly has the potential to make a big difference in data centres. Current hardware is not suitable for all workloads, but more capable stuff will be along soon enough.
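    The density point is just ratio arithmetic: if power per node falls by 10x while space per node falls by 100x, power per m^2 rises by 10x, assuming the space saving is fully exploited (the factors are wookey's; the packing assumption is for illustration):

        # Power density change = (power-per-node factor) / (space-per-node factor)
        power_per_node = 1 / 10     # each node draws a tenth of the power
        space_per_node = 1 / 100    # each node occupies a hundredth of the space
        print(power_per_node / space_per_node)   # -> 10.0: ten times the power per m^2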

    So, my advice if you want to green-up your datacentre is start looking to see if you can use any of this kit.

    There is a big Jevons effect here though - people will just do more computing on all that kit now that they suddenly have some spare power budget.

    (disclaimer: I have recently become employed by ARM, but that doesn't change my opinions, it just means I can't tell you the _really_ cool stuff yet)

    We have a chunky datacentre here, but can't really use the waste heat from it because the building needs cooling nearly all the time, and very little heating. CHP to local housing (there is some right next door) would be an interesting idea for using that heat. ARM are looking at all this stuff for their recently-acquired building refurb. I find it interesting how the corporate situation is not at all like the domestic one, and quite different solutions are appropriate.
    •  DamonHD, Apr 8th 2011 (edited)
    And as you've been kind, wookey, I feel all contrary and will point out that even sticking with 'conventional' x86 kit, using it better can make a big difference. For example, my new MacBook uses about half the power of the old one, especially when idling (though a chunk of that improvement is the LED backlight). But when I simply consolidated several separate (SPARC) servers onto one (x86) laptop I went down from over 600W to about 20W; in business this is often helped along by virtualisation, so that each user still has the appearance of a dedicated resource/system but without the fixed power and physical space overheads.

    Lots and lots of scope for improvement.

    Rgds

    Damon
  5.  
    In a heating climate it is easy to use the waste heat from cooling server rooms with heat pump technology (as well as HRV).

    The company I currently work for uses Mitsubishi VRV systems, which have BC controllers to manage the refrigerant; the waste heat can be used to heat other areas or hot water cylinders.
    •  pmcc, Apr 9th 2011
    Thanks for the tip, wookey. It will be interesting to compare capital costs per unit of processing power (it depends on the type of apps) between ARM and Intel-based servers. As you're on the inside, have you a feel for that? And are you aware of any practical ARM-based servers available to buy today?
    •  DamonHD, Apr 9th 2011
    The main cost you may find is actually porting code, for example away from a 'Wintel' environment and from existing dev/support staff's bad (or comfortable) habits.

    Rgds

    Damon
    •  wookey, Apr 10th 2011
    I haven't the foggiest on actual datacentre costs. I only know about domestic and personal kit. There are plenty of practical ARM-based servers for home use (most NAS boxes are ARM, for example, and some are capable home servers), but I'm really not sure if any of this stuff is actually available in datacentre form yet. It's quite new - expect plenty of change in this area over the next 2 years.

    If your stuff is running on Linux servers then there is minimal porting issue. Debian has been available on ARM since 2000, for example, and most code will either 'just work' or need a rebuild for ARM. Most of the things that made moving code to ARM difficult (the unsigned-char default, different behaviour of unaligned loads, lack of FP) have been consigned to history.

    Obviously any proprietary code you use can be a problem if the suppliers don't make a suitable ARM build. (Don't use proprietary code would be my advice :-)

    Microsoft announced ARM support for Windows in November which does mean people will be able to use Windows on their ARM kit soon. (Do people still use Windows in datacentres?)
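    On the 'most code will just work' point: code in an interpreted or managed language normally needs nothing beyond a runtime built for the target architecture, as this trivial sketch shows (it simply reports whatever architecture it happens to be running on, x86 or ARM, unchanged):

        # Architecture-neutral code: the interpreter hides the CPU details.
        import platform

        print(f"Machine: {platform.machine()}")   # e.g. 'x86_64' or 'armv5tel'
        print(f"System : {platform.system()}")    # e.g. 'Linux'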
    •  DamonHD, Apr 11th 2011 (edited)
    Wookey, yes, people do still use Windows stuff for servers. Slightly less lunatic than it used to be, but still...

    Note also, on the Linux/ARM issue: I agree with most of what you say, but if you want to run (say) .Net/Mono or Java (as I do) then you have to find a non-stock runtime, and indeed for any other 3rd-party stuff you have to find a distro that builds what you need, unless you want to spend your life rebuilding and debugging. For example, Ubuntu abandoned the ARMv5 core in the SheevaPlug, meaning that I'd now have to change distros to do anything beyond a kernel upgrade.

    So ARM remains slightly more painful/expensive from that point of view even for code like mine which is completely architecture neutral. (Shell scripts and Java.)

    Rgds

    Damon
    •  wookey, Apr 16th 2011
    Damon - Debian armel supports v5, and has Mono and Java built (along with 17,000 other packages). Ideal for SheevaPlugs. Ubuntu (and Linaro) have both chosen to support only v7 or later, which ignores an awful lot of v5 and v6 machines. For those you now pretty much get a choice of Debian or Debian, which is fine by me, as that's what I've been running on everything (ARM or x86) for the last 12 years anyway. It's a great choice for servers, NAS boxes and plug computers generally (one does wonder why they even have an 'Ubuntu Server', as I can't think of any reason one would want Ubuntu rather than Debian for a server).

    I haven't actually tried cross-grading a SheevaPlug, but it should work, no doubt with some tiresome fiddling about. Or you can just install from scratch painlessly: http://www.cyrius.com/debian/kirkwood/sheevaplug/plugs.html
  6.  
    Simple solution, dunk them in water...

    http://feedproxy.google.com/~r/treehuggersite/~3/GKmuMwQmbSc/9-percent-data-center-cooling-energy-reduction-fluid-submerged-servers-mineral-oil.php

    ;) ... must be tricky to fish them out when only a hard-reboot will do ...
    •  djh, Apr 19th 2011
    Posted By: mybarnconversion: Simple solution, dunk them ... must be tricky to fish them out when only a hard-reboot will do

    That's a great system. :peace:

    No need to fish them out. Is it Google or Sun that puts servers in shipping containers? They never open them in service; they just swap the whole container out when too many of the servers have failed. So it should be ideal to fill the container with oil.