Hydronic Plant Connect: The IT Cooling System Manager for Your Data Center
melcohit.com
HPC
Hydronic Plant Connect
Data Centers are designed to be fully operational all year long and to provide maximum dependability.
These requirements apply to every device that makes up a Data Center, and therefore to the cooling equipment as well.
Keeping the cooling system performing at its best is a key factor in modern Data Centers: cooling solutions need to be reliable and to save energy.
Mitsubishi Electric is therefore pleased to announce HPC, the new fully integrated optimizer for hydronic Data Center systems.
Fully developed in-house, HPC perfectly matches the need for cooling, reliability, and energy savings, guaranteeing excellent performance while fully respecting the required IT cooling demands.
Born from the collaboration between research centers in Italy and Japan, and from the deep-seated know-how of the Mitsubishi Electric Group, the new data center management software can be combined with any IT Cooling chiller and chilled water precision air conditioner.
Moreover, HPC perfectly suits future expansions thanks to its plug & play infrastructure and operating logics, making it the best choice for your scalable Data Center.
COMPLETELY AUTOADAPTIVE
REDUCED OPERATING COSTS
HPC: HYDRONIC PLANT CONNECT
Thanks to its plug & play philosophy, HPC is perfect for Data Centers that are built to be occupied gradually, in different phases, making it the ideal solution for your business development.
HPC – Hydronic Plant Connect: HOW IT WORKS
The HPC control logics enhance system efficiency by leveraging partial loads, redundant units, and favourable ambient conditions.
HPC bases its operation on proprietary logics and devices:
HPC INFRASTRUCTURE
HPC Logics
HPC manages chillers, CRAHs, and pumps, optimizing the entire chilled water system. Starting from the operating conditions of each single component, HPC adjusts the working parameters to maximize the overall efficiency. Optimization is always done with cooling dependability in mind, making HPC safe to use under any conditions the data center faces.
IT cooling load satisfaction is paramount. HPC always gives priority to cooling dependability. Therefore, actions are taken on the basis of the indoor unit groups’ status.

There are 4 operating modes:
1. Reset
2. Reduction
3. Optimization
4. No action

HPC acts on time intervals, and the main variables taken into consideration are:
Pumps’ speed
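As a rough illustration of the mode selection described above, the loop below picks one of the four operating modes from the indoor unit groups’ status. The group fields and the load thresholds are hypothetical assumptions for the sketch, not Mitsubishi Electric’s actual logic:

```python
from enum import Enum

class Mode(Enum):
    RESET = 1        # restore nominal setpoints to recover cooling capacity
    REDUCTION = 2    # back off optimization when groups approach saturation
    OPTIMIZATION = 3 # raise efficiency while cooling demand is satisfied
    NO_ACTION = 4    # hold the current working parameters

def select_mode(groups):
    """Pick an operating mode from the indoor unit groups' status.

    `groups` is a list of dicts with hypothetical fields:
    'load' (0..1 fraction of capacity) and 'satisfied' (bool).
    Thresholds are illustrative only.
    """
    if any(not g["satisfied"] for g in groups):
        return Mode.RESET          # cooling demand unmet: safety first
    avg_load = sum(g["load"] for g in groups) / len(groups)
    if avg_load > 0.9:
        return Mode.REDUCTION      # close to saturation: ease optimization
    if avg_load < 0.7:
        return Mode.OPTIMIZATION   # headroom available: optimize efficiency
    return Mode.NO_ACTION

print(select_mode([{"load": 0.5, "satisfied": True}]).name)  # → OPTIMIZATION
```

Evaluating the groups on fixed time intervals, as the text describes, would simply mean calling `select_mode` once per interval.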
HPC relies on proprietary, chiller-integrated LAN Multi Manager logics.
DYNAMIC MASTER
LOAD MANAGEMENT
Cooling loads are smartly managed according to the data center’s needs
1. DISTRIBUTION
2. SATURATION
HPC Multi Manager not only manages the correct operation of the chillers, but also controls the pumps and the auxiliary inputs.
Pump controls are available both for individual and centralized pump group configurations (on/off, VPF, 2PS, etc.).
Auxiliary inputs are applied at group level (group set-point adjustment, group demand limit, etc.).
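Of the pump strategies listed, variable primary flow (VPF) is the one that modulates pump speed continuously. A minimal sketch of one VPF control step is shown below; the proportional gain, pressure units, and speed limits are illustrative assumptions, not the product’s parameters:

```python
def vpf_pump_speed(dp_measured, dp_setpoint, speed, kp=0.02,
                   min_speed=0.3, max_speed=1.0):
    """One step of a proportional variable-primary-flow (VPF) loop.

    Raises pump speed when the differential pressure across the plant
    falls below setpoint and lowers it otherwise. Speed is clamped to
    `min_speed` so a minimum chiller flow is always guaranteed.
    All numbers (kPa, gains, limits) are illustrative.
    """
    error = dp_setpoint - dp_measured   # kPa below (positive) or above target
    speed += kp * error                 # proportional correction
    return max(min_speed, min(max_speed, speed))
```

A real controller would typically add integral action and ramp limiting; this sketch only shows the direction of the correction.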
INDOOR LAN GROUPS – LAN FUNCTIONS
HPC works on the basis of proprietary, CRAH-unit-integrated LAN logics.
DYNAMIC MASTER
The rotation of the stand-by units can be automatically managed according to specific time bands,
alarms, and cooling load variations.
In the event of a unit breakdown or disconnection from the LAN, stand-by units are forced to activate.
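The rotation and forced-activation behaviour described above could be sketched as follows. The unit fields, the daily rotation period, and the selection rule are hypothetical assumptions for illustration, not the actual HPC logic:

```python
from datetime import datetime

def active_units(units, standby_count, now=None):
    """Choose which units run, rotating stand-by duty over time.

    `units` is a list of dicts with hypothetical fields 'id' and 'ok'
    ('ok' is False when a unit is in alarm or has dropped off the LAN).
    The stand-by role shifts by one healthy unit per day; when units
    fail, the running set is filled from the survivors, which forces
    stand-by units into operation.
    """
    now = now or datetime.now()
    healthy = [u for u in units if u["ok"]]
    if not healthy:
        return []
    # Rotate the healthy list by one position per day (time-band rotation).
    shift = now.toordinal() % len(healthy)
    rotated = healthy[shift:] + healthy[:shift]
    # Keep the designed number of running units whenever enough are healthy.
    run_count = min(len(rotated), len(units) - standby_count)
    return [u["id"] for u in rotated[:run_count]]
```

With 4 units and 1 stand-by, three units run each day in a rotating pattern; if one running unit fails, all three remaining healthy units (including the former stand-by) are activated.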
Instead of running a few units close to their maximum load, HPC distributes the required thermal load among all units, making them work at partial loads and thus increasing efficiency. In case of a unit failure, its cooling load is shared among the other operating units, thus increasing the system’s reliability.
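The even load sharing and failure behaviour just described can be sketched in a few lines. The capacity data and the even-split rule are illustrative assumptions, not the actual HPC algorithm:

```python
def share_load(total_load_kw, units):
    """Distribute the thermal load evenly over all available units.

    `units` maps unit id -> capacity in kW (hypothetical data).
    Failed units are simply left out of the dict, so their share
    automatically moves to the survivors. Even sharing keeps every
    unit at partial load, where chillers are typically more efficient.
    """
    per_unit = total_load_kw / len(units)
    if any(per_unit > cap for cap in units.values()):
        raise ValueError("demand exceeds the capacity of some unit")
    return {uid: per_unit for uid in units}
```

For a 300 kW load over three 200 kW units, each runs at 50% load; if one unit fails, the remaining two each take 150 kW, still within capacity.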
The LAN connection can be exploited to manage units according to the average humidity and temperature values, making all units work at uniform conditions. Hot and cold spots are monitored and locally/automatically managed by each single unit.

The LAN connection allows the units to automatically adjust the fans’ speed according to the average pressures read by each unit. High and low pressure spots are monitored and locally/automatically managed by each single unit.
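A minimal sketch of the pressure-averaging fan adjustment follows; the pressure values, base speed, gain, and clamping range are all hypothetical:

```python
def fan_speeds(readings, base_speed=0.7, gain=0.002):
    """Set each unit's fan speed from the LAN-wide average pressure.

    `readings` maps unit id -> local pressure in Pa (hypothetical data).
    Units sitting in low-pressure spots speed up and units in
    high-pressure spots slow down, so conditions converge toward
    uniformity across the room. All numbers are illustrative.
    """
    avg = sum(readings.values()) / len(readings)
    return {uid: max(0.2, min(1.0, base_speed + gain * (avg - p)))
            for uid, p in readings.items()}
```

Each unit only needs the shared average and its own reading, which matches the "locally/automatically managed by each single unit" behaviour the text describes.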
KIPlink: LOCAL AND REMOTE
MONITORING FUNCTIONS
KIPLAN
Proprietary protocol communication between all Mitsubishi Electric Hydronics & IT Cooling Systems units.

DATA LOGGER FUNCTION
Infrastructure – KIPlink
Ethernet: the unit is controlled locally by means of an Ethernet connection; full access and control of the unit via web browser.

Wi-Fi: full access and control of the unit via Wi-Fi, thanks to the Mitsubishi Electric Hydronics & IT Cooling Systems app.
IT COOLING
PROJECT
PLANT CONFIGURATION
The plant consists of 3 free cooling chillers (one redundant) and 11 CRAHs (one redundant). Each chiller has been equipped with a pump controlled by VPF logics. The chiller chosen is a free cooling unit equipped with scroll compressors, an optimum solution for the climate and the Data Center size.
The CRAH unit selected fulfills the requirements for capacity and unit number, while at the same time providing good performance.
[Charts: EER and operating hours [h] versus outdoor temperature [°C], from −6 °C to 30 °C]
Results
Energy consumption: −11.3%

The results were obtained by comparing HPC with standard CRAH fan regulation. The overall amount of energy saved by the HPC logics is significant: HPC reduces the energy consumption by 11.3%. The main differences involve the CRAH units and the pumps.

Instead of wasting energy in the two-way valve, HPC acts on the setpoint, reducing the energy absorbed by the indoor unit fans while slightly increasing pump consumption. The advantages of the HPC control logics are particularly impressive at mid-to-low temperatures, where the free cooling technology is exploited to its maximum.

Moreover, the payback values show a really interesting return on investment of 10 months, making HPC a remarkable solution for optimizing Data Centers’ cooling systems.
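A simple payback follows directly from the saving fraction. The brochure does not state the investment or the annual energy bill, so the figures below are purely hypothetical placeholders that happen to reproduce a 10-month payback:

```python
def payback_months(upgrade_cost, annual_energy_cost, saving_fraction):
    """Simple payback period in months: cost divided by monthly savings."""
    monthly_saving = annual_energy_cost * saving_fraction / 12
    return upgrade_cost / monthly_saving

# Hypothetical figures: with a 200 k€ annual energy bill and the 11.3%
# saving from the case study, an 18.8 k€ investment repays in ~10 months.
print(round(payback_months(18_800, 200_000, 0.113)))  # → 10
```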
[Charts: annual energy consumption [kWh] by component (indoor units, chillers w/o pumps, pumps) for CRAH fan regulation vs. HPC, and cumulative costs [M€] over 10 years]
AT A GLANCE
Tel (+39) 0424 509 500 - Fax (+39) 0424 509 509