The data center industry faces the challenge of effectively managing increasing demands while ensuring sustainable growth. Embracing a holistic approach is key to moving beyond the conventional siloed view and deepening our understanding of data centers as part of a multi-objective optimization roadmap.
This article explores trends disrupting data centers in 2025, highlighting the dynamic and evolving landscape of the data center sector. A helicopter view of energy sources and consumption is presented before diving into the key trends.
Energy
Energy has always been and remains a cornerstone of our progress. Over the next few years, energy production is expected to continue to grow, fueled by economic growth and rising demand. In 2023, the global primary energy supply hit approximately 620EJ or 172,000TWh, with a heavy reliance on oil, coal, and natural gas, which combined accounted for about 81 percent, while renewables comprised about 15 percent, and nuclear power four percent.
Electricity is a secondary energy source generated from primary energy sources. In 2023, global electricity consumption reached a record level close to 108EJ or 30,000TWh, highly dependent on coal, renewables, natural gas, and nuclear. In 2024 and 2025, electricity demand is expected to grow at a faster rate of three to four percent year-on-year, fueled in part by global electrification.
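The unit conversion behind these figures is straightforward: one exajoule is 10^18 joules and one terawatt-hour is 3.6×10^15 joules, so 1EJ is roughly 278TWh. A quick sketch, using the 2023 figures cited above:

```python
# Convert energy in exajoules (EJ) to terawatt-hours (TWh).
# 1 EJ = 1e18 J and 1 TWh = 3.6e15 J, so 1 EJ ~= 277.8 TWh.
EJ_TO_TWH = 1e18 / 3.6e15

primary_supply_ej = 620  # global primary energy supply, 2023
electricity_ej = 108     # global electricity consumption, 2023

print(round(primary_supply_ej * EJ_TO_TWH))  # ~172,000 TWh
print(round(electricity_ej * EJ_TO_TWH))     # 30,000 TWh
```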
Data centers have significantly contributed to rising electricity demand in many regions. In 2022, data centers and cryptocurrency consumed approximately 460TWh globally, accounting for about two percent of the world's electricity demand. This consumption is projected to exceed 1,000TWh by 2026. While global data center electricity consumption has so far grown moderately, some countries with expanding data center markets are experiencing swift growth.
Skyrocketing compute-intensive workloads, new approaches to satisfying higher power requirements, liquid cooling adoption, and efforts in sustainability and efficiency are anticipated to be the key trends that will continue to disrupt data centers in 2025. It is important to recognize, however, that there is no one-size-fits-all solution to tackle these challenges.
Compute-intensive workloads
Data center workloads have been profoundly impacted by the recent explosion of compute-intensive workloads including HPC, AI, and generative AI. These workloads are boosting the production of new IT equipment, while also transforming the data center landscape by increasing the number of AI-ready or dedicated AI data centers.
The diversity and evolution of compute-intensive workloads pose new challenges. To accommodate new IT equipment with higher thermal design power ('superchips' exceeding 1kW) and higher rack power densities (50+kW, 100+kW, 300+kW per rack), data centers must embrace more efficient cooling and power solutions.
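To put these densities in perspective, here is a back-of-envelope sketch of how the rack count for a fixed IT load shrinks as rack power density rises. The 10MW IT load is an assumption for illustration, not a figure from the article:

```python
# Hypothetical: racks required to host a fixed 10 MW IT load at
# different rack power densities. Illustrative numbers only.
it_load_kw = 10_000  # assumed 10 MW IT load

for density_kw in (10, 50, 100, 300):
    racks = it_load_kw / density_kw
    print(f"{density_kw:>3} kW/rack -> {racks:,.0f} racks")
```

At 300+kW per rack, the same load fits in a few dozen racks, which is why cooling and power distribution per rack, rather than floor space, become the binding constraints.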
As an example of generative AI workloads, OpenAI's GPT-2 model, released in 2019, ranged from 117 million to 1.5 billion parameters. GPT-3, released in 2020, contained 175 billion parameters, while GPT-4, introduced in 2023, is estimated to have around 500 billion parameters. The number of parameters indicates a model's learning and text-generation capacity.
Typically, the larger the model, the more sophisticated its understanding and generative abilities, meaning a significantly higher demand for computational resources to train and operate. We need to keep in mind that these are new workloads that have never been processed, and the applications are just starting to flood the market.
Dedicated AI data centers are increasingly becoming a reality, offering optimized compute, network, and power densities to process new compute-intensive workloads while meeting goals of efficiency, reliability, scalability, security, and sustainability. The power density is much higher compared to traditional data centers, but because workloads and applications are increasing exponentially, facilities are also growing in size. By 2026, dedicated AI data centers are expected to consume between 100 and 300 TWh.
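Taken together with the earlier projection that data centers and cryptocurrency will exceed 1,000TWh by 2026, these figures imply that dedicated AI facilities could account for roughly 10 to 30 percent of that total. A quick check, using only the figures cited in the text:

```python
# Back-of-envelope share of projected 2026 consumption (figures from the text).
total_2026_twh = 1_000               # projected data center + crypto consumption
ai_low_twh, ai_high_twh = 100, 300   # projected dedicated AI data centers

print(f"{ai_low_twh / total_2026_twh:.0%} to {ai_high_twh / total_2026_twh:.0%}")
# 10% to 30%
```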
Nvidia is leading and dominating the AI chip market. However, the field is ripe with opportunities, as competitors introduce their own solutions. These range from startups to companies like Google, Microsoft, Amazon, and Meta, to chip designers like Intel, AMD, Broadcom, Ampere, and Cerebras.
The energy consumed in a data center correlates with the workload processed. Accurately quantifying the energy consumption of generative AI, such as a ChatGPT query, is difficult due to various factors, including model size and complexity, the infrastructure in use, and the optimization techniques applied. ChatGPT, trained on vast amounts of text data, understands and produces human-like text. The energy consumed by a ChatGPT query could plausibly range from one to 10Wh. Assuming 4.5Wh as an average, that's roughly 15 times the energy consumed by a standard Google search, estimated at 0.3Wh. Notably, the industry is actively working to improve the energy efficiency of AI systems.
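The 15x figure follows directly from the per-query estimates above; a minimal sketch, assuming the 4.5Wh average and the 0.3Wh Google search estimate cited in the text:

```python
# Rough per-query energy comparison using the estimates cited above.
chatgpt_query_wh = 4.5   # assumed average within the 1-10 Wh range
google_search_wh = 0.3   # estimated energy of a standard Google search

ratio = chatgpt_query_wh / google_search_wh
print(round(ratio))  # 15
```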
Data center location, power availability, and power requirements are key variables to consider when strategizing on technology selection. A decision needs to be made on whether to pursue new construction or to retrofit existing facilities. Existing data centers may retrofit parts of their facility to support AI workloads and since AI training workloads are not latency-sensitive, they can be processed in facilities in areas with lower costs. On the other hand, low latency, reliability, and scalability are critical for processing AI inference workloads, so preferred locations may involve higher costs.
It is not surprising that new data center developments, including dedicated AI data centers, are scaling up to hundreds of megawatts, propelling the electricity demand for new projects into the gigawatt range. These substantial power requirements present challenges, but opportunities are also emerging in the form of various solutions tailored to meet these needs, such as microgrids (distributed energy resources), energy storage systems, UPS with grid interaction capabilities, turbines, gensets, fuel cells, nuclear, and renewable energy sources.
Top players in the data center industry, including Schneider Electric, Vertiv, Eaton, ABB, and Huawei, are already offering innovative solutions to address the growing power demands of data centers. In the realm of specific technologies, numerous firms that specialize in these areas are also at the forefront of the industry.
As data centers handle more compute-intensive workloads, their heat transfer requirements become more stringent due to the higher thermal design power and higher power densities. The thermal behavior is influenced by the power demand of the IT equipment, contingent on the processed workload. Liquid cooling has emerged as the main solution to manage the heat transfer of new IT equipment while improving operational efficiency and reducing energy consumption. The table below shows the main drivers and challenges for data center liquid cooling.
Table: Drivers and challenges for data center liquid cooling
Different approaches to liquid cooling have been successfully tested, with single-phase direct-to-chip emerging as a frontrunner, facilitating hybrid solutions that combine air and liquid cooling. Concurrently, various technologies are maturing with no clear leader in the foreseeable future, such as traditional cold plates, microfluidic microchannels, micro-convective cooling, and other approaches. There are also positive and negative pressure systems, single-phase and two-phase cooling, immersion, spray, combinations of cold plate and immersion, and entirely novel methods. In the next few years, most data centers are expected to implement at least some form of liquid cooling technology.
Governments have adopted an active role in promoting innovative technologies. As an example, the US Department of Energy’s ARPA-E Coolerchips initiative aims ‘to reduce total cooling energy expenditure to less than five percent of a typical data center’s IT load at any time and any US location for a high-density compute system.’ This initiative is specifically supporting the development of disruptive liquid cooling solutions.
The liquid cooling market is being propelled by numerous companies, each offering its own innovative solutions. From top vendors in the data center cooling space including Vertiv, Schneider Electric, Trane, Stulz, and Johnson Controls, to liquid cooling niche companies such as Accelsius, Asperitas, Chilldyne, CoolIT Systems, GRC, Iceotope, Jetcool, LiquidStack, Mara, Quantas, Submer, and Zutacore. Additionally, companies like Dell, HPE, Gigabyte, Huawei, IBM, Inspur, Lenovo, Sugon, Supermicro, and Wiwynn are providing liquid-cooled IT equipment solutions directly to final users.
The data center industry's actions and commitments, coupled with innovative technologies and supportive government policies, are vital for driving continuous advancements in efficiency and sustainability. As the drive for decarbonization gains momentum, AI has emerged as a pivotal player in transitioning towards a low-emission or net-zero future, with the potential to bolster sustainability efforts and decrease greenhouse gas (GHG) emissions. However, the challenge of tackling sustainability and climate change is hampered by the fragmented collection and utilization of information.
AI offers a revolutionary approach by not just processing, aggregating, and analyzing vast datasets but also optimizing complex systems and improving forecasting with remarkable efficiency. For instance, Google is pushing for more energy-efficient computing infrastructure and identifying practices to significantly reduce the energy required to train AI models. Trillium, their sixth-generation Tensor Processing Unit (TPU), is more than 67 percent more energy-efficient than the previous generation, the TPU v5e. In 2023, Google's average annual PUE was 1.10, and 100 percent of its annual electricity consumption has been matched with renewable energy since 2017.
Google is leveraging AI models to reduce GHG emissions, including a fuel-efficient routing model, considering traffic, terrain, and a vehicle's engine; a hydrological model to predict floods up to seven days in advance; and a traffic model to optimize the timing of traffic lights, reducing stop-and-go traffic and fuel consumption. Google aims to achieve net-zero emissions across its operations and value chain by 2030.
While maintaining optimism about AI's potential to drive a positive impact on optimization and performance, we need to be realistic about the environmental footprint and the collaborative effort needed to navigate this fast-evolving landscape. Responsible management of AI's resource consumption is crucial. Our grasp of its present demands is clear, yet its future path is still uncertain.
Other sustainability and efficiency measures gaining traction include:
- Integrating modularity with pre-engineered and prefabricated infrastructure
- Adopting environmentally friendly materials and technologies
Sustainability reporting is set to become a best practice, complete with specific sustainability and efficiency metrics. Stakeholders are increasingly asking for more transparency about sustainability practices to cut down greenhouse gas emissions, while also embracing more resource-efficient approaches, viewing it as a competitive advantage.
Moises Levy, Ph.D., is managing director of Research and Market Intelligence at Datacenter Dynamics