Citus Installation and Configuration
Environment:
PostgreSQL 15.8
Citus 12.1-1
Because I created four separate clusters on the same node, they all show the same IP
address. Internally, each cluster is distinguished by its port.
Worker Nodes:
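As a rough sketch of how this layout can be registered with the coordinator (the host and ports below are illustrative, assuming one coordinator plus workers on the same node, each on its own port):

-- Run on the coordinator; citus_add_node() registers each worker cluster.
-- Same host, different ports, because all clusters live on one node.
SELECT citus_add_node('10.88.11.8', 5433);
SELECT citus_add_node('10.88.11.8', 5434);

-- Verify the registered workers:
SELECT * FROM citus_get_active_worker_nodes();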
You'll notice that data rebalancing takes place on the sharded (worker) nodes, not on the
coordinator (primary) node.
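A minimal sketch of triggering and monitoring a rebalance from the coordinator, using the background-rebalancer functions available since Citus 11.1:

-- Start a background shard rebalance; shards are moved between workers.
SELECT citus_rebalance_start();

-- Check progress while the rebalancer runs.
SELECT * FROM citus_rebalance_status();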
Execute: SELECT * FROM citus_shards;
    table_name    | shardid |       shard_name        | citus_table_type | colocation_id |  nodename  | nodeport | shard_size
------------------+---------+-------------------------+------------------+---------------+------------+----------+------------
 pgbench_accounts |  102328 | pgbench_accounts_102328 | distributed      |            12 | 10.88.11.8 |     5433 |  146989056
 pgbench_accounts |  102328 | pgbench_accounts_102328 | distributed      |            12 | 10.88.11.8 |     5434 |  146989056
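For context, a table shows up here with citus_table_type = 'distributed' only after it has been sharded. For pgbench_accounts that would look roughly like the statement below (using aid as the distribution column is an assumption, based on the query that follows):

-- Shard pgbench_accounts across the workers on its key column.
SELECT create_distributed_table('pgbench_accounts', 'aid');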
QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 -> Index Scan using pk_accounts on pgbench_accounts (cost=0.56..8.58 rows=1 width=352) (actual time=0.104..0.107 rows=1 loops=1)
shardeg=# explain (analyze) select * from pgbench_accounts where aid=30 limit 50;
                                                  QUERY PLAN
--------------------------------------------------------------------------------------------------------------
 Custom Scan (Citus Adaptive) (cost=0.00..0.00 rows=0 width=0) (actual time=12.721..12.724 rows=1 loops=1)
   Task Count: 1
   -> Task
In both cases, the query is executed from the primary (coordinator) node, while the data
itself is retrieved from the shard node.
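One way to confirm which shard and worker serve a given key is the shard-lookup function below (a sketch; the value 30 mirrors the query above):

-- Map the distribution-column value 30 to its shard id...
SELECT get_shard_id_for_distribution_column('pgbench_accounts', 30);

-- ...then look up which worker hosts that shard.
SELECT nodename, nodeport
FROM citus_shards
WHERE shardid = get_shard_id_for_distribution_column('pgbench_accounts', 30);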
Native Partitioning:
Consider leveraging PostgreSQL's native partitioning features within AWS RDS.
Although native partitioning does not distribute data across nodes the way Citus
does, it can still improve performance and manageability for large tables.
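A minimal sketch of declarative range partitioning (the table and partition names are illustrative, reusing the pgbench schema from above):

-- Parent table; rows are routed to partitions by ranges of aid.
CREATE TABLE pgbench_accounts_part (
    aid      integer NOT NULL,
    bid      integer,
    abalance integer,
    filler   char(84)
) PARTITION BY RANGE (aid);

-- One partition per range of account ids.
CREATE TABLE pgbench_accounts_p1 PARTITION OF pgbench_accounts_part
    FOR VALUES FROM (1) TO (1000001);
CREATE TABLE pgbench_accounts_p2 PARTITION OF pgbench_accounts_part
    FOR VALUES FROM (1000001) TO (2000001);

-- Queries filtering on aid are pruned to a single partition.
EXPLAIN SELECT * FROM pgbench_accounts_part WHERE aid = 30;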