[4/4] - multi: integrate new rbf coop close FSM into the existing peer flow #8453
base: rbf-coop-fsm
Conversation
Repurposing this to be the commit set that integrates the new state machine into the daemon. New commit set coming shortly. Finalizing the itests, then will remove this from draft.
Pushed a series of new commits that includes an e2e itest for the new RBF flow. Both sides can increase the fee rate for their version until one of them finally confirms.
Ok so this does work. That said, I'm not fond of the dynamic nature of the message router, nor of the amount of switch code that handles multiplexing the different coop close protocols.
Right now it seems like we have made the peer responsible for managing the channel closure, and I'm really not sure that's the right call. We have now introduced a new thread of control with respect to channel ID message serialization, and I think that can be problematic.
Protofsms always launch new threads AFAICT, so now the main peer thread, the link thread, and the pfsm ccv2 thread are all competing with one another for message ordering.
It occurs to me that the main weakness of protofsm is this requirement of always launching a new thread for it. I find myself wanting a means of defining state machines that is composable, such that the composition still shares the same control thread, lest we create more and more opportunities for concurrency issues.
Overall I can't find any issues with the actual implementation of the CCV2 protocol here. The tests look good, though some of the edges need to be sanded down. Based on what I see here, I also can't yet fully endorse the protofsm approach more broadly.
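One possible shape for the composability asked for above, sketched with hypothetical names: state machines as plain transition handlers with no goroutine of their own, composed so that every machine shares the caller's single control thread and message ordering is preserved. This is a sketch of the idea, not lnd's protofsm API.

```go
package main

import "fmt"

// Event is the input type shared by all composed state machines.
type Event interface{}

// StateMachine is a minimal, hypothetical interface: a transition
// function that runs on the caller's goroutine, with no thread of
// its own.
type StateMachine interface {
	// ProcessEvent consumes an event and reports whether it handled it.
	ProcessEvent(e Event) bool
}

// Composite fans a single event stream out to several state machines
// while staying on the caller's goroutine, so there is exactly one
// thread of control for message serialization.
type Composite struct {
	machines []StateMachine
}

func (c *Composite) ProcessEvent(e Event) bool {
	for _, m := range c.machines {
		if m.ProcessEvent(e) {
			return true
		}
	}
	return false
}

// counter is a toy machine that handles int events and ignores the rest.
type counter struct{ total int }

func (c *counter) ProcessEvent(e Event) bool {
	n, ok := e.(int)
	if !ok {
		return false
	}
	c.total += n
	return true
}

func main() {
	c := &counter{}
	comp := &Composite{machines: []StateMachine{c}}

	// All events are processed inline, in order, on this goroutine.
	for _, e := range []Event{1, 2, "ignored", 3} {
		comp.ProcessEvent(e)
	}
	fmt.Println(c.total) // prints 6
}
```

The trade-off is that a slow handler now blocks the shared loop, which is the price of giving up per-machine goroutines.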
peer/brontide.go
Outdated
// If a message router is active, then we'll try to have it
// handle this message. If it can, then we're able to skip the
// rest of the message handling logic.
ok := fn.MapOptionZ(p.msgRouter, func(r MsgRouter) error {
nit: name this err
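To illustrate the nit with a self-contained sketch: `Option` and `MapOptionZ` below are minimal stand-ins for lnd's `fn` package (the real signatures may differ). The point is that naming the returned value `err` rather than `ok` makes the nil-check read correctly, since the mapped function returns an error.

```go
package main

import "fmt"

// Option is a minimal stand-in for an fn.Option-style type
// (hypothetical here, not lnd's actual definition).
type Option[A any] struct {
	value A
	some  bool
}

func Some[A any](a A) Option[A] { return Option[A]{value: a, some: true} }
func None[A any]() Option[A]    { return Option[A]{} }

// MapOptionZ applies f when the option is populated, otherwise it
// returns the zero value of B (nil, when B is error).
func MapOptionZ[A any, B any](o Option[A], f func(A) B) B {
	var zero B
	if !o.some {
		return zero
	}
	return f(o.value)
}

func main() {
	router := Some("router")

	// Naming the result `err` (not `ok`) matches what the value is:
	// a nil error means the router handled the message.
	err := MapOptionZ(router, func(r string) error {
		fmt.Printf("routing via %s\n", r)
		return nil
	})
	if err == nil {
		fmt.Println("handled by message router")
	}
}
```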
peer/brontide.go
Outdated
// If a message router is active, then we'll try to have it
// handle this message. If it can, then we're able to skip the
// rest of the message handling logic.
Clever way to do incremental introduction of the message router.
link.OnCommitOnce(htlcswitch.Outgoing, func() {
	if !link.DisableAdds(htlcswitch.Outgoing) {
I think this needs to be rebased as @ellemouton changed the call signature of this function.
@@ -175,7 +175,7 @@ type FlushHookID uint64

 // LinkDirection is used to query and change any link state on a per-direction
 // basis.
-type LinkDirection bool
+type LinkDirection = bool
I strongly object to this. We should absolutely not allow construction of these values with booleans. I chose boolean as an implementation detail. It should be opaque to the outside. If we want to finesse the package hierarchy I'd support moving the LinkDirection into a core types package.
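The objection can be demonstrated concretely. A short sketch (with illustrative names) of the difference between a Go defined type and a type alias: the alias admits any `bool` expression, while the defined type requires an explicit conversion, keeping the boolean representation opaque outside the package.

```go
package main

import "fmt"

// Defined keeps bool as an implementation detail: a bool variable
// cannot be assigned to it without an explicit conversion.
type Defined bool

// Alias IS bool: any boolean expression flows in freely, which is
// exactly the loss of opacity the review comment objects to.
type Alias = bool

func describe(d Alias) string {
	if d {
		return "outgoing"
	}
	return "incoming"
}

func main() {
	raw := true

	// Compiles only because Alias is an alias for bool; with a
	// defined-type parameter, passing `raw` would be a compile error.
	fmt.Println(describe(raw))

	// The defined type demands an explicit conversion;
	// `var d Defined = raw` would not compile.
	var d Defined = Defined(raw)
	fmt.Println(bool(d))
}
```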
@@ -0,0 +1,82 @@
package peer |
I don't think this should be in the peer. It's all chan state stuff.
// We'll also compute the final fee rate that the remote party
// paid based off the absolute fee and the size of the closing
// transaction.
vSize := mempool.GetTxVirtualSize(btcutil.NewTx(closeTx))
feeRate := chainfee.SatPerVByte(
	int64(msg.SigMsg.FeeSatoshis) / int64(vSize),
)
We should eventually fold this commit into the original I think.
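The fee-rate recovery in the snippet is straightforward integer arithmetic; a minimal standalone sketch (with made-up numbers, and without the btcd vsize helper) shows the flooring behavior of the integer division:

```go
package main

import "fmt"

// SatPerVByte mirrors the unit in the snippet above: satoshis per
// virtual byte.
type SatPerVByte int64

// feeRateFromAbsFee recovers the fee rate the remote party paid from
// the absolute fee and the transaction's virtual size. Integer
// division means the result is floored, as in the snippet.
func feeRateFromAbsFee(feeSats, vSize int64) SatPerVByte {
	return SatPerVByte(feeSats / vSize)
}

func main() {
	// Hypothetical numbers: a 2,500 sat fee on a 250 vbyte close tx.
	fmt.Println(feeRateFromAbsFee(2500, 250)) // 10 sat/vB

	// Flooring: 2,600 sats / 250 vbytes is 10.4, reported as 10.
	fmt.Println(feeRateFromAbsFee(2600, 250)) // 10 sat/vB
}
```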
// Ignore any potential duplicate channel flushed events.
case *ChannelFlushed:
	return &CloseStateTransition{
		NextState: c,
	}, nil
Why are we getting duplicate flushed events?? This makes me concerned about other parts of the design.
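For reference, the pattern in the snippet is an idempotent self transition: a duplicate event maps back to the current state rather than producing an error. Distilled into a standalone sketch (names mirror the snippet; the surrounding harness is hypothetical):

```go
package main

import "fmt"

// State is a minimal stand-in for the close FSM's state interface.
type State interface{ Name() string }

// ChannelFlushing is the state that may see a duplicate flush event.
type ChannelFlushing struct{}

func (c *ChannelFlushing) Name() string { return "ChannelFlushing" }

// ChannelFlushed is the (possibly duplicated) event.
type ChannelFlushed struct{}

// CloseStateTransition names the next state, as in the snippet.
type CloseStateTransition struct {
	NextState State
}

// ProcessEvent maps a duplicate ChannelFlushed event to a self
// transition: NextState is the same state value, so processing the
// event twice is harmless.
func (c *ChannelFlushing) ProcessEvent(event interface{}) (*CloseStateTransition, error) {
	switch event.(type) {
	case *ChannelFlushed:
		return &CloseStateTransition{NextState: c}, nil
	default:
		return nil, fmt.Errorf("unexpected event %T", event)
	}
}

func main() {
	s := &ChannelFlushing{}
	trans, err := s.ProcessEvent(&ChannelFlushed{})
	if err != nil {
		panic(err)
	}
	// The duplicate event leaves us in the very same state value.
	fmt.Println(trans.NextState == State(s))
}
```

Whether the duplicate should exist at all is a separate design question; the self transition only makes it safe.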
@@ -470,7 +470,7 @@ type Brontide struct {

 // cooperative channel closures. Any channel closing messages are directed
 // to one of these active state machines. Once the channel has been closed,
 // the state machine will be deleted from the map.
-activeChanCloses map[lnwire.ChannelID]chanCloserFsm
+activeChanCloses *lnutils.SyncMap[lnwire.ChannelID, chanCloserFsm]
🫡
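A minimal, hypothetical stand-in for what a `SyncMap` of this shape provides: a type-safe wrapper over the standard library's `sync.Map`, so concurrent goroutines (the peer's read path versus the message router, in this PR's case) can touch the map without an external mutex. This sketches the idea, not lnutils' actual implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// SyncMap is a generic, type-safe wrapper over sync.Map. All methods
// are safe for concurrent use without additional locking.
type SyncMap[K comparable, V any] struct {
	m sync.Map
}

// Store inserts or replaces the value for key k.
func (s *SyncMap[K, V]) Store(k K, v V) { s.m.Store(k, v) }

// Load returns the value for k and whether it was present.
func (s *SyncMap[K, V]) Load(k K) (V, bool) {
	var zero V
	raw, ok := s.m.Load(k)
	if !ok {
		return zero, false
	}
	return raw.(V), true
}

// Delete removes the entry for k, mirroring how a closer is dropped
// from activeChanCloses once the channel is closed.
func (s *SyncMap[K, V]) Delete(k K) { s.m.Delete(k) }

func main() {
	var closes SyncMap[string, int]
	closes.Store("chan-1", 42)

	if v, ok := closes.Load("chan-1"); ok {
		fmt.Println(v)
	}

	closes.Delete("chan-1")
	_, ok := closes.Load("chan-1")
	fmt.Println(ok)
}
```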
@@ -122,8 +122,22 @@ type closeMsg struct {

 // PendingUpdate describes the pending state of a closing channel.
 type PendingUpdate struct {
Why is this in the brontide instead of the chancloser code?
	"github.com/stretchr/testify/require"
)

func testCoopCloseRbf(ht *lntest.HarnessTest) {
🫡
In this commit, we use the interfaces we created in the prior commit to make a new method capable of spinning up the new rbf coop closer.
In this commit, we add a new composite chanCloserFsm type. This'll allow us to store a single value that might be a negotiator or an rbf-er. In a follow up commit, we'll use this to conditionally create the new rbf closer.
In this commit, we fully integrate the new RBF close state machine into the peer. For the restart case after shutdown, we can short circuit the existing logic, as the new FSM will handle retransmitting the shutdown message itself and doesn't need to delegate that duty to the link. Unlike the existing state machine, we're able to restart the flow to sign a coop close with a new higher fee rate. In this case, we can now send multiple updates to the RPC caller, one for each newly signed coop close transaction. To implement the async flush case, we'll launch a new goroutine to wait until the state machine reaches the `ChannelFlushing` state, then we'll register the hook. We don't do this at start up, as otherwise the channel may _already_ be flushed, triggering an invalid state transition.
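The "register the flush hook only once the FSM reaches `ChannelFlushing`" idea can be sketched as follows, with toy types rather than lnd's actual protofsm API. The key point is that the waiter observes states as they are entered, so a channel that is already flushed at startup never triggers the hook path at the wrong time.

```go
package main

import "fmt"

// fsm is a toy stand-in for a protofsm state machine: it publishes the
// name of each state it enters on a channel.
type fsm struct {
	states chan string
}

// waitForState blocks until the machine enters the target state. In
// the real flow this would run in a goroutine launched at close time,
// not at startup, so an already-flushed channel can't cause an
// invalid transition.
func (f *fsm) waitForState(target string) bool {
	for s := range f.states {
		if s == target {
			return true
		}
	}
	return false
}

func main() {
	f := &fsm{states: make(chan string, 2)}
	f.states <- "ChannelActive"
	f.states <- "ChannelFlushing"
	close(f.states)

	hookRegistered := false
	if f.waitForState("ChannelFlushing") {
		// Only now do we register the flush hook.
		hookRegistered = true
	}
	fmt.Println(hookRegistered)
}
```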
For now, we disallow the option to be used with the taproot chans option, as the new flow hasn't yet been updated for nonce usage.
We don't return an error on broadcast fail as the broadcast might have failed due to insufficient fees, or inability to be replaced, which may happen when one side attempts to unnecessarily bump their coop close fee.
This'll be useful to communicate what the new fee rate is to an RPC caller.
If we go to close while the channel is already flushed, we might get an extra event, so we can safely ignore it and do a self state transition.
With the new RBF based close, we'll actually close the same channel multiple times, so this check isn't required any longer.
This fixes some existing race conditions, as the `finalizeChanClosure` function was being called from outside the main event loop.
If we hit an error, we want to wipe the state machine state, which also includes removing the old endpoint.
This'll allow us to notify the caller each time a new coop close transaction with a higher fee rate is signed.
Resp is always nil, so we actually need to log event.Update here.
In this commit, we extend `CloseChannelAssertPending` with new args that return the raw close status update (as we have more things we'd like to assert) and also allow us to pass in a custom fee rate.
The itest has both sides try to close multiple times, each time with increasing fee rates.
@lightninglabs-deploy relax.
This PR integrates the new RBF coop close FSM into the existing control flow in the peer struct. With the way the new state machine works in concert with the msg router, we actually need to create+register the new state machine for eligible channels as soon as the peer connection is established (`loadActiveChannels`). This is required since these messages won't be part of the existing static switch in the `readHandler`, so if the `MsgEndpoint` isn't registered from the very start, we'll fail to handle the messages (or they'll erroneously try to create the existing negotiation state machine).

This PR can be divided into roughly 3 parts:
One point of discussion: as is, we'll only store the last coop close transaction we signed in the main database. Once confirmed, the wallet will know of the canonical version, but do we also want to store the complete series in the database as well? I think no, but thought it was worth explicitly calling out.
RPC wise, as long as the initial gRPC client that requested the coop close is still active, we'll now send a new update event for each new RBF transaction signed. As is, we also send an update for both the local + remote coop close transactions (each side can now have an entirely distinct close txn). In contrast, the existing coop close flow only ever sends a single update once the coop close is published, then another one after final confirmation.
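A toy sketch of the update stream described above, using illustrative types rather than lnd's RPC messages: the caller may now observe several pending updates, one per newly signed higher-fee transaction and per side, before the single confirmation update arrives.

```go
package main

import "fmt"

// closeUpdate is a hypothetical, simplified shape for a close status
// update: either a pending signed transaction or the confirmation.
type closeUpdate struct {
	txid      string
	feeRate   int64 // sat/vB
	isLocal   bool
	confirmed bool
}

func main() {
	// An illustrative RBF close: both sides sign at 5 sat/vB, the
	// local side bumps to 10 sat/vB, and that version confirms.
	updates := []closeUpdate{
		{txid: "tx-a", feeRate: 5, isLocal: true},
		{txid: "tx-b", feeRate: 5, isLocal: false},
		{txid: "tx-c", feeRate: 10, isLocal: true},
		{txid: "tx-c", feeRate: 10, confirmed: true},
	}

	for _, u := range updates {
		if u.confirmed {
			fmt.Printf("confirmed: %s\n", u.txid)
			continue
		}
		side := "remote"
		if u.isLocal {
			side = "local"
		}
		fmt.Printf("pending %s close %s @ %d sat/vB\n", side, u.txid, u.feeRate)
	}
}
```

This contrasts with the legacy flow, which would have produced exactly one pending update and one confirmation.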
TODO
- Add opt out CLI args
- Add itests