Zabbix TimescaleDB
PREMIUM PARTNER
NORDIC Excellence
Powerhouse of databases
Locations
▪ Skovlunde
▪ Aarhus
▪ Stockholm
▪ Prague
Implementing TimescaleDB without downtime
How we implemented TimescaleDB partitioning and compression on a large Zabbix installation without downtime
Before we start, please vote for this ZBXNEXT:
Our Zabbix
Monitoring our customers' databases, servers and applications
We are the designated Database Administrators (DBAs)
Zabbix is our tool
Internally
• Zabbix as our own internal monitoring tool
Typically, we help set up the monitoring together with the customer; after that, the customer maintains the monitoring themselves, with support from us.
Zabbix Support subscriptions (MSP) are part of this, with Miracle42 as 1st-level support and Zabbix Support as 2nd level.
Our Zabbix: System information
Our Zabbix: Dashboards
Our Ambition
[Architecture diagram: web access through the customers' firewalls; Zabbix proxies @customers report through a firewall to the Zabbix servers (VIP#1/VIP#2, IP#1/IP#2), backed by PostgreSQL with TimescaleDB.]
Why TimescaleDB?
Our Zabbix database holds 4 TB of data and keeps growing.
The housekeeper is constantly running and cannot keep up.
https://www.zabbix.com/documentation/6.0/en/manual/appendix/install/timescaledb
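Most of that volume sits in the history* and trends* tables. A generic catalog query shows where the space goes (a sketch, not from the slides):

```sql
-- Largest tables by total size (table + indexes + TOAST)
SELECT relname,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```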
Zabbix - TimescaleDB
Concept
1. Create a new table
2. Register the new table with TimescaleDB
3. Create a trigger on the current table to populate the new table
4. Load data from the current table into the new table
5. Switch tables
Migrating to TimescaleDB
[Diagram: the history table holds rows a–r and keeps receiving new rows (s, t) over time.]
Migrating to TimescaleDB
SQL> CREATE TABLE history_new (
LIKE history
INCLUDING DEFAULTS
INCLUDING CONSTRAINTS
INCLUDING INDEXES
);
[Diagram: history holds rows a–t; history_new has been created empty.]
Migrating to TimescaleDB
Concept (step 2): Register the new table with TimescaleDB
Migrating to TimescaleDB
SQL> select create_hypertable('history_new', 'clock', chunk_time_interval => 86400, migrate_data => true);
Since history.clock is an integer Unix timestamp, chunk_time_interval is given in seconds: 86400 means one-day chunks.
[Diagram: history_new is now a hypertable, still empty; history continues to receive rows.]
Migrating to TimescaleDB
Concept (step 3): Create a trigger on the current table to populate the new table
Migrating to TimescaleDB
SQL> create or replace function insert_history_m42()
...
begin
  insert into history_new
  select * from inserted_rows;
...
[Diagram: the trigger is in place; a new row (u) inserted into history is copied into history_new as well.]
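The slide shows only the body of the trigger function. A minimal sketch of how such a statement-level trigger could be wired up, under the assumption that `inserted_rows` is a PostgreSQL transition table (the trigger name `history_to_new` is hypothetical, not from the slides):

```sql
-- Statement-level AFTER INSERT trigger on the old table.
-- "inserted_rows" is a transition table (PostgreSQL 10+) holding
-- all rows inserted by the triggering statement.
CREATE OR REPLACE FUNCTION insert_history_m42()
RETURNS trigger AS $$
BEGIN
    INSERT INTO history_new
    SELECT * FROM inserted_rows;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER history_to_new
    AFTER INSERT ON history
    REFERENCING NEW TABLE AS inserted_rows
    FOR EACH STATEMENT
    EXECUTE FUNCTION insert_history_m42();
```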
Concept (step 4): Load data from the current table into the new table
Migrating to TimescaleDB
SQL>
for r in select itemid...
loop
  insert into history_new
  select * from history as t where...
end loop;
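The loop on the slide is abbreviated. A minimal sketch of such a batch copy, one itemid at a time so each transaction stays small; the cut-off value and the exact WHERE condition are assumptions, not the slides' actual code:

```sql
DO $$
DECLARE
    -- Hypothetical cut-off: the clock value at which the trigger went
    -- live. Rows at or after it are already mirrored into history_new.
    cutoff  integer := 1670000000;
    r       RECORD;
BEGIN
    FOR r IN SELECT DISTINCT itemid FROM history LOOP
        INSERT INTO history_new
        SELECT * FROM history AS t
        WHERE t.itemid = r.itemid
          AND t.clock  < cutoff;
        COMMIT;  -- per-item commit; needs PostgreSQL 11+ (transaction
                 -- control inside DO blocks)
    END LOOP;
END;
$$;
```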
[Diagram sequence: batch after batch, the historical rows (a, b, c, …) are copied into history_new while the trigger keeps mirroring new rows (u, v, w), until history_new contains everything history contains.]
Migrating to TimescaleDB
Concept (step 5): Switch tables
Migrating to TimescaleDB
SQL> alter table history rename to history_old;
SQL> alter table history_new rename to history;
SQL> commit;
[Diagram: history_old retains all rows; the renamed hypertable now serves as history with the same contents.]
Migrating to TimescaleDB
SQL> drop table history_old;
SQL> commit;
[Diagram: only the new history hypertable remains, with all rows intact.]
5. Switch tables
Warning: Be careful with database locks (!)
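ALTER TABLE ... RENAME takes an ACCESS EXCLUSIVE lock, so it both waits for and blocks every other query on the table. One way to limit the blast radius (a sketch, not the slides' exact commands):

```sql
-- Give up quickly instead of queueing behind a long-running query
-- (while every other query queues behind us):
SET lock_timeout = '2s';
ALTER TABLE history RENAME TO history_old;
ALTER TABLE history_new RENAME TO history;
RESET lock_timeout;
-- If the rename times out, retry; never leave it waiting in the queue.
```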
The reason for this significant drop in size was not compression (which had not yet been enabled), but that old data was dropped when its partitions were dropped.
The cause was that the housekeeper had not been able to keep up with the incoming data, so the database had grown far beyond its intended size.
Result
[Before/after screenshots: database size before and after the migration]
Lessons learned
1) Although the Housekeeper process does not run all the time, it can trail behind on the work it should do.
The issue seems to be the logic by which the Zabbix server decides whether it can compress the partitions.
For some reason, Zabbix really did not want to compress our partitions.
Compression
I downloaded the Zabbix source code, found the place where compression of chunks is set, and backtraced the code to the housekeeper's main loop.
# zabbix_server -R log_level_increase=452428
# zabbix_server -R housekeeper_execute
(452428 is presumably the PID of the housekeeper process; log_level_increase accepts a target PID.)
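To see whether the housekeeper actually compressed anything, the TimescaleDB catalog can also be queried directly (a sketch; view and column names are from TimescaleDB 2.x):

```sql
-- Count compressed vs. uncompressed chunks per history hypertable
SELECT hypertable_name,
       is_compressed,
       count(*) AS chunks
FROM timescaledb_information.chunks
WHERE hypertable_name LIKE 'history%'
GROUP BY hypertable_name, is_compressed
ORDER BY hypertable_name, is_compressed;
```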
/var/log/zabbix/zabbix_server.log:
I verified in the frontend that compression was disabled and restarted the Zabbix server. Then I switched compression on again.
M42 Zabbix: Reporting project
[Diagram: reports are generated from Zabbix via the API into a DWH, delivered over the Internet.]
Thank You for Your time!
Questions?
Feel free to contact me at