Data centers across the world are set to use 19 percent more energy this year than they did in 2012, and as colleges and universities look for environmentally friendly alternatives to that inefficiency, two innovative techniques could help cool ever-churning server racks.
Schools have long experimented with ways to keep data centers from overheating without running air conditioning at its coldest settings around the clock – a problem also faced by internet giants such as Google and Netflix as they adjust to constantly increasing web usage worldwide.
A study published in the March 29 edition of the journal Science outlines two ways companies and colleges could cope with increasing user demand without ramping up traditional cooling mechanisms in expansive campus data centers. Diego Reforgiato Recupero, a computer scientist and electrical engineer at the University of Catania in Italy, said “smart standby” and dynamic frequency scaling could be key parts of the green effort to cool down server rooms.
Either approach would reduce the heat that must be neutralized in today’s data centers, according to the research.
“Smart standby” keeps unused parts of a data center operating at the lowest possible energy levels, a sharp departure from data centers that run their networks at peak levels so that video streaming, for example, remains available during spikes in traffic. The trade-off is that servers would have to ramp back up when web demand increased, possibly leaving users frustrated by a slow internet connection.
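The standby logic described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual method: the function names, per-server capacity, and headroom value are all hypothetical, chosen only to show the idea of waking just enough servers for current demand while keeping a small buffer against traffic spikes.

```python
# Hypothetical "smart standby" policy: servers not needed for current
# traffic sit in a low-power state and are woken as demand rises.
# capacity_per_server and headroom are illustrative numbers.

def servers_needed(requests_per_sec, capacity_per_server=1000):
    """Smallest number of active servers that can absorb current traffic."""
    return max(1, -(-requests_per_sec // capacity_per_server))  # ceiling division

def apply_standby(total_servers, requests_per_sec, headroom=1):
    """Return (active, standby) server counts, keeping a little spare
    capacity so a sudden spike does not leave users waiting on wake-ups."""
    active = min(total_servers, servers_needed(requests_per_sec) + headroom)
    return active, total_servers - active
```

With 100 servers and 25,000 requests per second, `apply_standby(100, 25000)` would keep 26 servers active (25 for load plus one spare) and leave 74 in standby, which is where the energy savings come from.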
Dynamic frequency scaling would allow a computer’s central processing unit to reduce its clock speed during lulls in network traffic. Data center equipment could also be programmed to monitor its own heat output, maintaining a stable temperature without having to be manually turned off or on.
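A toy version of that frequency-scaling decision might look like the following. The clock steps and 20 percent headroom factor are invented for the example; real systems expose this capability through mechanisms such as Linux's CPUFreq governors rather than application code.

```python
# Illustrative dynamic frequency scaling: step the CPU clock down when
# utilization falls, cutting power draw and heat output at idle.
# The frequency steps below are hypothetical P-states.

FREQ_STEPS_MHZ = [1200, 1800, 2400, 3000]

def pick_frequency(utilization):
    """Map current CPU utilization (0.0-1.0) to the lowest clock step
    that still leaves roughly 20% headroom above the work demanded."""
    demand_mhz = utilization * FREQ_STEPS_MHZ[-1]
    for freq in FREQ_STEPS_MHZ:
        if demand_mhz <= freq * 0.8:
            return freq
    return FREQ_STEPS_MHZ[-1]  # fully loaded: run at top speed
```

At 10 percent utilization this policy would settle on the lowest 1200 MHz step, while near-full load would push it back to 3000 MHz, mirroring how a processor ramps down during quiet periods and back up when traffic returns.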