
HPC
Hydronic Plant Connect

THE IT COOLING SYSTEM MANAGER FOR YOUR DATA CENTER

melcohit.com
HPC
Hydronic Plant Connect

Data Centers are designed to be fully operational all year long and to provide maximum dependability. These requirements apply to every device that makes up a Data Center, and therefore to the cooling equipment.

Keeping the cooling system performing at its best is a key factor in modern
Data Centers; cooling solutions need to be reliable and save energy.
Therefore, Mitsubishi Electric is pleased to announce HPC, the
new fully integrated optimizer for hydronic Data Center systems.

Fully developed in-house, HPC perfectly matches the need for cooling, reliability, and energy savings, guaranteeing excellent performance while fully respecting the required IT cooling demands.

EASY CONTROL FROM YOUR DEVICE
SCALABLE SYSTEM

2/3

The cooling equipment works together as one system
HPC constantly analyzes the operating conditions of the system and regulates the operational
parameters so that the internal and external units perform at their best, in full synergy and
complete reliability.

Thanks to the collaboration between research centers in Italy and Japan, and the result of the
deep-seated know-how of the Mitsubishi Electric Group, the new data center management
software can be combined with any IT Cooling chillers and chilled water precision air
conditioners.

Moreover, HPC perfectly suits future expansions thanks to its plug & play infrastructure and operating logics, making it the best choice for your scalable Data Center.

COMPLETELY AUTOADAPTIVE
REDUCED OPERATING COSTS
HPC: HYDRONIC PLANT CONNECT

ONE NETWORK TO CONNECT CHILLERS WITH INDOOR UNITS

Thanks to its advanced algorithm, outdoor chillers (air cooled, free cooling, or water cooled) and indoor chilled water units are managed to optimize their operation and enhance the system's efficiency in any condition.

CONTROL YOUR COOLING PLANT DIRECTLY FROM YOUR DEVICE

Accessibility to the units is possible directly from your mobile device, via Local Area Network or via VPN.

COMPLETELY AUTOADAPTIVE

HPC is based on an autoadaptive algorithm that instantly detects and analyzes the operating conditions of the plant, and consequently optimizes it to perform at its best.

HARNESS THE FULL POTENTIAL OF ACTIVELY REDUNDANT N+1 SYSTEMS

HPC performs best in data centers featuring one or more redundant units. This advanced logic optimizes the plant instantly in part load conditions, increasing efficiency.

4/5

The must-have tool for today's and tomorrow's chilled water data centers.

A UNIQUE AND FULLY INTEGRATED SOLUTION BASED ON PROPRIETARY LOGICS

Based on proprietary logics, HPC completes the cooling package and connects both Mitsubishi Electric indoor and outdoor units in order to reach the highest efficiency values with no need for any external devices.

REDUCED OPERATING COSTS

With extreme precision, HPC regulates not only chillers and CRAHs but also the main components of the hydraulic system, such as pumps and valves.

FREE COOLING

In particular, HPC shows its greatest benefits with free cooling chillers and VSD pumps.

IDEAL SOLUTION FOR SCALABLE DATA CENTERS

Thanks to its plug & play philosophy, HPC is perfect for those Data Centers that are built to be occupied gradually, in different phases, making it the ideal solution for your business development.
HPC
Hydronic Plant Connect

HOW IT WORKS

The HPC control logics enhance the system efficiency by leveraging partial loads, redundant units, and favourable ambient conditions. HPC bases its operation on proprietary logics and devices:

• INDOOR LAN GROUPS
• KIPLINK
• LAN MULTI MANAGER

HPC INFRASTRUCTURE

One group of external chillers (up to 8 units) is connected to up to 20 groups of indoor chilled water units (up to 15 units per group).

Communication between indoor and outdoor units is achieved by KIPLAN, the ethernet cabled network that links each LAN group with the others.

KIPLAN is managed by the KIP Master chiller, which collects information from the KIP client of each indoor LAN group.

HPC analyzes the data, and the optimized parameters are sent to all connected units.
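As a rough illustration, the topology described above can be sketched as a small data model. The class and field names below are hypothetical; only the size limits (one group of up to 8 chillers, up to 20 indoor LAN groups of up to 15 units each) come from the text.

```python
from dataclasses import dataclass, field

MAX_CHILLERS = 8          # one group of external chillers (up to 8 units)
MAX_INDOOR_GROUPS = 20    # up to 20 groups of indoor chilled water units
MAX_UNITS_PER_GROUP = 15  # up to 15 units per indoor LAN group

@dataclass
class IndoorLanGroup:
    units: list  # CRAH unit ids; one unit acts as the KIP client

    def __post_init__(self):
        if not 1 <= len(self.units) <= MAX_UNITS_PER_GROUP:
            raise ValueError("an indoor LAN group holds 1-15 units")

@dataclass
class KiplanNetwork:
    chillers: list  # chiller ids; one acts as the KIP Master
    indoor_groups: list = field(default_factory=list)

    def __post_init__(self):
        if not 1 <= len(self.chillers) <= MAX_CHILLERS:
            raise ValueError("a chiller group holds 1-8 units")
        if len(self.indoor_groups) > MAX_INDOOR_GROUPS:
            raise ValueError("at most 20 indoor LAN groups")
```

A configuration that exceeds the documented limits (for instance 9 chillers in one group) is rejected at construction time.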

6/7

HPC Logics

HPC PUSHES THE BOUNDARIES ON INTEGRATED CONTROL LOGICS

HPC manages chillers, CRAHs, and pumps, optimizing the entire chilled water system. Starting from the operating conditions of each single component, HPC adjusts the working parameters to maximize the overall efficiency. Optimization is always done with cooling dependability in mind, making HPC safe to use in any conditions the data center faces.

IT cooling load satisfaction is paramount. HPC always gives priority to cooling dependability. Therefore, actions are taken on the basis of the indoor unit groups' status. There are 4 operating modes:

PRIORITY 1: RESET
Cooling load: suddenly increases.
Action: HPC contribution is reset and the system immediately increases the cooling capacity.

PRIORITY 2: REDUCTION
Cooling load: slightly increases.
Action: HPC contribution is reduced. The system increases the cooling capacity.

PRIORITY 3: OPTIMIZATION ON
Cooling load: stable or decreases.
Action: HPC actively optimizes the system.

PRIORITY 4: NO ACTION
Cooling load: stable or decreases.
Action: HPC has already pushed the system to the best performance possible in current conditions. No further action is taken.
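The mode table above can be read as a simple priority cascade. The sketch below is only an illustration of that decision order; the trend labels and the `already_optimal` flag are invented names, while the four modes and their priorities come from the table.

```python
def hpc_operating_mode(load_trend, already_optimal=False):
    """Pick the HPC operating mode from the cooling-load trend.

    load_trend: 'sudden_increase', 'slight_increase', or 'stable_or_decreasing'
    (signal names are hypothetical; the four modes and their priority
    order follow the brochure's table).
    """
    if load_trend == "sudden_increase":
        # Cooling dependability first: drop the HPC contribution entirely.
        return "RESET"
    if load_trend == "slight_increase":
        # Back off the optimization so cooling capacity can rise.
        return "REDUCTION"
    if already_optimal:
        # Best performance already reached in current conditions.
        return "NO_ACTION"
    return "OPTIMIZATION_ON"
```

Note how the two "increasing load" branches are checked first: load satisfaction always wins over efficiency optimization.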

HPC acts on time intervals, and the main variables taken into consideration are:

• Cooling demand of each indoor unit group (room temperature, fans' speed, valve opening)
• Chilled water temperature
• Pumps' speed
• Chillers' group operating status (outdoor air temperature, FC availability)
LAN MULTI MANAGER

HPC relies on proprietary, chiller-integrated LAN Multi Manager logics.

LAN Multi Manager allows one to create a single group of chillers (up to 8 units), managing the units as one, performing several group functions and providing system dependability.

CHILLER LAN FUNCTIONS

• Dynamic Master
• Load distribution or saturation
• Stand-by management with automatic or forced rotation
• Resource priority management
• Group fast restart
• Pump management
• Auxiliary inputs

DYNAMIC MASTER

If the master unit becomes disconnected, the Dynamic Master logic automatically elects a new master from the other units, allowing the chillers to continue working.

Candidate master units with different succession priority can be set by the client.
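A minimal sketch of such an election, assuming units carry a client-set succession priority. The dict keys and the tie-breaking rule are illustrative; only the behaviour (automatic election among the remaining units when the master drops out) is from the text.

```python
def elect_master(units, current_master):
    """Return the id of the unit that should act as master.

    units: list of dicts with 'id', 'online', and a client-set
    'succession_priority' (lower value = elected first).
    If the current master is still online, nothing changes.
    """
    master = next((u for u in units if u["id"] == current_master), None)
    if master is not None and master["online"]:
        return current_master  # master still reachable, nothing to do
    candidates = [u for u in units if u["online"]]
    if not candidates:
        raise RuntimeError("no unit available to act as master")
    # Elect the online unit with the best (lowest) succession priority.
    return min(candidates, key=lambda u: u["succession_priority"])["id"]
```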

LOAD MANAGEMENT

Cooling loads are smartly managed according to the data center's needs.

1. DISTRIBUTION
The load is distributed equally among the active units of the group.

2. SATURATION
Units are exploited at their maximum; only the ones necessary work.
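The two policies can be sketched as follows. The function signature is invented for illustration; only the two behaviours (equal sharing vs. running as few units as possible at full capacity) come from the text.

```python
import math

def allocate_load(total_kw, unit_capacity_kw, n_units, mode="distribution"):
    """Split a cooling load (kW) over a group of identical units.

    'distribution' shares the load equally among all active units;
    'saturation' runs as few units as possible, each at full capacity,
    with the remainder on the last active unit.
    """
    if mode == "distribution":
        return [total_kw / n_units] * n_units
    if mode == "saturation":
        needed = math.ceil(total_kw / unit_capacity_kw)
        loads = [unit_capacity_kw] * (needed - 1)
        loads.append(total_kw - unit_capacity_kw * (needed - 1))
        return loads + [0.0] * (n_units - needed)
    raise ValueError("mode must be 'distribution' or 'saturation'")
```

For a 900 kW load on three 500 kW units, distribution yields 300 kW each, while saturation runs two units (500 kW and 400 kW) and leaves one off.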

8/9

RESOURCE PRIORITY MANAGEMENT

The outdoor group is set to exploit the most advantageous cooling technology. Free cooling chillers will have the highest working priority, if free cooling operation is available.

Furthermore, when units with different compressor technologies are in the same system, it is possible to set different working priorities, exploiting the most advantageous and efficient one.

STAND-BY UNIT MANAGEMENT WITH AUTOMATIC OR FORCED ROTATION

• Automatic or manual rotation of units according to the restart priority and running hours equalization;
• Immediate activation in case of a unit failure, disconnection, or emergency load levels.

GROUP FAST RESTART

This function allows the IT facility manager to set a configurable start-up sequence on the basis of priority and working hours.

• No simultaneous start-ups of different unit compressors.
• Always the most advantageous cooling technology: if free cooling operation is available, it is given the highest priority.

PUMP AND AUXILIARY INPUT MANAGEMENT

HPC Multi Manager not only manages the correct operation of the chillers, but also controls pumps and auxiliary inputs.

• Pump controls are available both for individual and centralized pump group configurations (on/off, VPF, 2PS, etc.)
• Auxiliary inputs are applied at a group level (group set-point adjustment, group demand limit, etc.)
INDOOR LAN GROUPS

HPC works on the basis of proprietary, CRAH-unit-integrated LAN logics.

Indoor LAN allows one to create a group of CRAHs (up to 15 units), managing the units as one, performing several group functions and providing system dependability.

INDOOR LAN FUNCTIONS

• Dynamic Master
• Stand-by and back-up unit management
• Active Fan on Stand-by
• Active Distribution Load
• T&H average management and Local T Protection
• Active Pressure Load and Local Pressure Protection

DYNAMIC MASTER

The Dynamic Master logic automatically elects a new Master from all other units connected in the same LAN when the master unit fails. Thus, the group will continue to operate.

STAND-BY AND BACK-UP UNIT MANAGEMENT

The rotation of the stand-by units can be automatically managed according to specific time bands, alarms, and cooling load variations. In the event of a unit breakdown or disconnection from the LAN, stand-by units are forced to activate.

10 / 11

ACTIVE FANS ON STAND-BY UNITS

In stand-by, reserve units do not turn off their fans, but keep them running at the speed set by the parameters. Fans' continuous operation always provides air flow, maintaining the desired pressure value. This means that the unit is ready to start up if necessary.

[Figure: a 10-unit indoor LAN group with 2 stand-by units; Active Fan on Stand-by (AFS) keeps their fans running at 70% speed.]

ACTIVE DISTRIBUTION LOAD

Instead of running a few units close to their maximum load, HPC distributes the required thermal load among all units, making them work at partial loads and thus increasing efficiency. In case of a unit failure, its cooling load is shared among the other operating units, thus increasing the system's reliability.

PASSIVE REDUNDANCY: 3 units at 100%, 1 unit off (stand-by).
ACTIVE REDUNDANCY: 4 units at 75%.
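The arithmetic behind the 3 x 100% vs 4 x 75% comparison: the same total airflow is shared over one more fan. The efficiency gain then follows from the cube-law fan affinity relation (power roughly proportional to speed cubed), which is a standard fan-engineering assumption rather than a figure from this brochure.

```python
def relative_fan_power(n_units, speed_fraction):
    """Total fan power relative to one unit at full speed, assuming the
    cube-law fan affinity relation (power ~ speed**3).

    Illustrates why active redundancy (4 units at 75%) beats passive
    redundancy (3 units at 100%): airflow is the same, fan power is not.
    """
    return n_units * speed_fraction ** 3

# passive: 3 units at 100% -> 3.0
# active:  4 units at 75%  -> 1.6875 (about 44% less fan power)
```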

T&H AVERAGE MANAGEMENT AND LOCAL PROTECTION

The LAN connection can be exploited to manage units according to the average humidity and temperature values, making all units work at uniform conditions. Hot and cold spots are monitored and locally/automatically managed by each single unit.

ACTIVE PRESSURE LOAD AND LOCAL PROTECTION

The LAN connection allows the units to automatically adjust the fans' speed according to average pressures read by each unit. High and low pressure spots are monitored and locally/automatically managed by each single unit.
KIPlink: LOCAL AND REMOTE MONITORING FUNCTIONS

KIPlink is an exclusive product of Mitsubishi Electric Hydronics & IT Cooling Systems. You can monitor, control, and have full access to the unit from any device (PC, laptop, mobile phone) thanks to the Wi-Fi generated by KIPlink and the ethernet connection.

EASIER ON-SITE OPERATION

REAL-TIME GRAPHS AND TRENDS

DATA LOGGER FUNCTION

3. REMOTE MONITORING

Exploiting the LAN connection, it is possible to connect to the unit from anywhere through your VPN. Full access and control of the unit is done via WEB browser.

Customer VPN: secure accessibility to the LAN (cyber security in charge of the customer).

4. KIPLAN

Proprietary protocol communication between all Mitsubishi Electric Hydronics & IT Cooling Systems units.

WHY DO YOU NEED A KIPLAN?

• To have a single entry point of HMI for several units.
• To allow the Mitsubishi Electric Hydronics & IT Cooling Systems indoor and outdoor units to communicate with the HPC.

12 / 13

Infrastructure – KIPlink

1. PROXIMITY SMART KEYBOARD

Full access and control of the unit via Wi-Fi, thanks to the Mitsubishi Electric Hydronics & IT Cooling Systems app.

2. LOCAL MONITORING

The unit is controlled locally by means of an ethernet connection. Full access and control of the unit is done via WEB browser.

WHAT KINDS OF NETWORKS ARE POSSIBLE?

• Full Wi-Fi network: used when the units are very close (about 10 m).
• Hybrid network: some units connected in Wi-Fi mode, others connected with ethernet cable.
• Full ethernet cabled network: configuration used when there is significant distance between units, and when HPC is present.

HPC ENERGY ANALYSIS

Simulation software tested and validated in Mitsubishi Electric laboratories.

PROJECT

This data center, located in London, needs to dissipate 1000 kW from servers. The analysis evaluates the significant savings of the new HPC control logics compared to traditional CRAH fan regulation.

CRAH FAN REGULATION VS HPC

• CRAH fan regulation: optimizes the indoor unit fan speed.
• HPC: optimizes the main components of the system: chiller, CRAHs, and pumps.

PLANT CONFIGURATION

3 x NR-FC-Z /A /0594
• Cooling capacity: 547 kW (25/18 °C, 42 °C)
• EER: 3.31 (25/18 °C, 42 °C)
• Length: 7430 mm

11 x w-NEXT HD UK U 170 E10
• Cooling capacity: 118 kW (30 °C, 40% RH, 39600 m³/h)
• EER: 17 (30 °C, 40% RH, 39600 m³/h)
• Width: 3510 mm

The plant consists of 3 free cooling chillers (one redundant) and 11 CRAHs (one redundant). Each chiller has been equipped with a pump controlled by VPF logics. The chiller chosen is a free cooling unit equipped with scroll compressors, an optimum solution for the climate and the Data Center size. The CRAH unit selected fulfills the requirements for capacity and unit number, at the same time providing good performance.

Plant operating conditions
• Operating schedule: 7 days/week, continuous operation
• Return/delivery water set point: 25/18 °C

Economic conditions
• Energy cost: 0.16 €/kWh
• Interest rate: 6%
• Inflation rate: 3%

14 / 15

ANNUAL ENERGY EFFICIENCY AND CONSUMPTION COMPARISON

[Charts: EER and absorbed energy plotted against outdoor temperature (-6 °C to 30 °C). Total electrical energy with CRAH fan regulation: 100%; with HPC: 88.7%.]

Results

Energy consumption: -11.3%

The results were obtained by comparing HPC and CRAH fan regulation. The overall amount of energy saved by HPC logics is significant: HPC reduces the energy consumption by 11.3%. The main differences involve the CRAH units and the pumps. Instead of wasting energy in the two-way valve, HPC acts on the setpoint, reduces the energy absorbed by the indoor units, and slightly increases pump consumption. The advantages of HPC control logics are particularly impressive at mid-to-low temperatures, where the free cooling technology is exploited at its maximum.

Moreover, payback values show a really interesting return on investment of 10 months, making HPC a remarkable solution for optimizing Data Centers' cooling systems.
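The payback figure follows from simple arithmetic on the annual saving. The brochure reports roughly 16,800 EUR/year saved and a 10-month payback; the investment figure used below is a hypothetical input chosen only to show the calculation, not a quoted price.

```python
def simple_payback_months(annual_saving_eur, extra_investment_eur):
    """Months needed to recover an up-front cost from a constant
    annual saving (simple, undiscounted payback)."""
    return 12 * extra_investment_eur / annual_saving_eur

# With a (hypothetical) 14,000 EUR option price and the reported
# 16,800 EUR/year saving:
# simple_payback_months(16_800, 14_000) -> 10.0 months
```

A full appraisal would discount the savings with the stated 6% interest and 3% inflation rates, which lengthens the payback slightly; for horizons under a year the simple figure is a close approximation.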

Payback: 10 months. Savings: 16,800 €/year.

[Charts: annual energy absorbed under CRAH fan regulation vs HPC, split by component (indoor units 40.3% vs 30.2%, chillers without pumps 46.9% vs 52.9%, pumps 12.8% vs 17.0%), and cumulative costs over 10 years.]

AT A GLANCE

• Power input saving: 122,634 kWh per year
• CO2 saved per year: 674.48 tons
• Payback period: 10 months
• Annual energy efficiency: +13%
Head Office: Via Caduti di Cefalonia 1 - 36061 Bassano del Grappa (VI) - Italy CV_HPC_06-2021_ENG

Tel (+39) 0424 509 500 - Fax (+39) 0424 509 509
