
Commit 36a4ac2

Fix race conditions in newly-added test.
Buildfarm has been failing sporadically on the new test. I was able to reproduce this by adding a random 0-10 s delay in the walreceiver, just before it connects to the primary. There's a race condition where node_3 is promoted before it has fully caught up with node_1, leading to diverged timelines. When node_1 is later reconfigured as a standby following node_3, it fails to catch up:

LOG:  primary server contains no more WAL on requested timeline 1
LOG:  new timeline 2 forked off current database system timeline 1 before current recovery point 0/30000A0

That's the situation where you'd need to use pg_rewind, but in this case it happens already while we are just setting up the actual pg_rewind scenario we want to test. Change the test so that it waits until node_3 is connected and fully caught up before promoting it, so that we get a clean, controlled failover.

Also rewrite some of the comments, for clarity. The existing comments detailed what each step in the test did, but didn't give a good overview of the situation the steps were trying to create.

For reasons I don't understand, the test setup had to be written slightly differently in 9.6 and 9.5 than in later versions. The 9.5/9.6 version needed node_1 to be reinitialized from backup, whereas in later versions it could be shut down and reconfigured to be a standby. But even 9.5 should support a "clean switchover", where the primary makes sure that pending WAL is replicated to the standby on shutdown. It would be nice to figure out what's going on there, but that's independent of pg_rewind and the scenario that this test tests.

Discussion: https://www.postgresql.org/message-id/b0a3b95b-82d2-6089-6892-40570f8c5e60%40iki.fi
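The "current recovery point 0/30000A0" in the log above is an LSN, PostgreSQL's 64-bit WAL position written as two hex halves separated by a slash. A standby has "caught up" once the LSN it has replayed is at least the target LSN. A minimal sketch of that comparison in Python (the helper names and sample values are illustrative, not PostgreSQL code; the test itself is Perl):

```python
def parse_lsn(lsn: str) -> int:
    """Convert PostgreSQL's textual LSN 'hi/lo' (two hex halves)
    into a single 64-bit WAL position."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

def caught_up(replay_lsn: str, target_lsn: str) -> bool:
    """A standby is caught up once it has replayed at least target_lsn."""
    return parse_lsn(replay_lsn) >= parse_lsn(target_lsn)

# The recovery point from the log message above:
print(hex(parse_lsn("0/30000A0")))          # 0x30000a0

# A standby that has replayed only up to 0/3000000 is not yet caught up:
print(caught_up("0/3000000", "0/30000A0"))  # False
print(caught_up("0/30000A0", "0/30000A0"))  # True
```

This is why promoting node_3 before its replay LSN reaches node_1's insert LSN diverges the timelines: the new timeline forks off at a point before WAL that node_1 has already written.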
1 parent eb93f3a commit 36a4ac2

1 file changed: +20 −13 lines changed
src/bin/pg_rewind/t/008_min_recovery_point.pl

@@ -50,54 +50,61 @@
 $node_1->safe_psql('postgres', 'CREATE TABLE public.bar (t TEXT)');
 $node_1->safe_psql('postgres', "INSERT INTO public.bar VALUES ('in both')");
 
-
-# Take backup
+#
+# Create node_2 and node_3 as standbys following node_1
+#
 my $backup_name = 'my_backup';
 $node_1->backup($backup_name);
 
-# Create streaming standby from backup
 my $node_2 = get_new_node('node_2');
 $node_2->init_from_backup($node_1, $backup_name,
 	has_streaming => 1);
 $node_2->start;
 
-# Create streaming standby from backup
 my $node_3 = get_new_node('node_3');
 $node_3->init_from_backup($node_1, $backup_name,
 	has_streaming => 1);
 $node_3->start;
 
-# Stop node_1
+# Wait until node 3 has connected and caught up
+my $lsn = $node_1->lsn('insert');
+$node_1->wait_for_catchup('node_3', 'replay', $lsn);
 
+#
+# Swap the roles of node_1 and node_3, so that node_1 follows node_3.
+#
 $node_1->stop('fast');
-
-# Promote node_3
 $node_3->promote;
 
-# node_1 rejoins node_3
-
+# reconfigure node_1 as a standby following node_3
 my $node_3_connstr = $node_3->connstr;
-
 $node_1->append_conf('postgresql.conf', qq(
 primary_conninfo='$node_3_connstr'
 ));
 $node_1->set_standby_mode();
 $node_1->start();
 
-# node_2 follows node_3
-
+# also reconfigure node_2 to follow node_3
 $node_2->append_conf('postgresql.conf', qq(
 primary_conninfo='$node_3_connstr'
 ));
 $node_2->restart();
 
-# Promote node_1
+#
+# Promote node_1, to create a split-brain scenario.
+#
+
+# make sure node_1 is full caught up with node_3 first
+$lsn = $node_3->lsn('insert');
+$node_3->wait_for_catchup('node_1', 'replay', $lsn);
 
 $node_1->promote;
 
+#
 # We now have a split-brain with two primaries. Insert a row on both to
 # demonstratively create a split brain. After the rewind, we should only
 # see the insert on 1, as the insert on node 3 is rewound away.
+#
 $node_1->safe_psql('postgres', "INSERT INTO public.foo (t) VALUES ('keep this')");
 
 # Insert more rows in node 1, to bump up the XID counter. Otherwise, if
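The fix hinges on `wait_for_catchup`, a helper in PostgreSQL's Perl TAP framework that polls the upstream node until the named standby's reported replay position reaches a target LSN. A rough sketch of that polling pattern in Python, with the server-side status query replaced by a callback (all names here are illustrative, not PostgresNode internals; LSNs are shown as plain integers rather than the textual 'hi/lo' form):

```python
import time

def wait_for_catchup(get_replay_lsn, target_lsn, timeout=30.0, interval=0.01):
    """Poll until the standby has replayed WAL up to target_lsn.

    get_replay_lsn stands in for querying the primary's replication
    status (e.g. pg_stat_replication); it returns the standby's replay
    position as an integer, or None if the standby is not yet connected.
    Returns True on catchup, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        replayed = get_replay_lsn()
        if replayed is not None and replayed >= target_lsn:
            return True
        time.sleep(interval)
    return False

# Simulate a standby that is first not connected, then lags, then catches up.
feed = iter([None, 0x3000060, 0x30000A0])
assert wait_for_catchup(lambda: next(feed), 0x30000A0)
```

The design point of the commit is simply where this wait happens: once before promoting node_3 (so the failover is clean) and once before promoting node_1 (so the split-brain forks from a known, fully replicated point).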
