
[4/4] - multi: integrate new rbf coop close FSM into the existing peer flow #8453

Open · wants to merge 32 commits into base: rbf-coop-fsm
Conversation

@Roasbeef (Member) commented Feb 1, 2024

This PR integrates the new RBF coop close FSM into the existing control flow in the peer struct. Given the way the new state machine works in concert with the msg router, we actually need to create and register the new state machine for eligible channels as soon as the peer connection is established (loadActiveChannels). This is required because these messages won't be part of the existing static switch in the readHandler, so if the MsgEndpoint isn't registered from the very start, we'll fail to handle the messages (or they'll erroneously try to create the existing negotiation state machine).
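As a rough illustration of the registration flow described above, here is a minimal, self-contained sketch of a message router with per-channel endpoints registered at connection setup, so that close-related messages are claimed before the static switch ever sees them. All names here (`MsgRouter`, `MsgEndpoint`, `rbfCloser`, `loadActiveChannels`) are simplified stand-ins for illustration, not lnd's actual APIs.

```go
package main

import (
	"fmt"
	"sync"
)

// Msg is a stand-in for an lnwire message; the real types differ.
type Msg struct {
	ChanID string
	Kind   string
}

// MsgEndpoint is a hypothetical endpoint that can claim and handle messages.
type MsgEndpoint interface {
	CanHandle(Msg) bool
	SendMessage(Msg) bool
}

// MsgRouter fans incoming messages out to registered endpoints.
type MsgRouter struct {
	mtx       sync.RWMutex
	endpoints []MsgEndpoint
}

func (r *MsgRouter) RegisterEndpoint(e MsgEndpoint) {
	r.mtx.Lock()
	defer r.mtx.Unlock()
	r.endpoints = append(r.endpoints, e)
}

// Route returns true if some endpoint handled the message, letting the
// caller skip the rest of its static message-handling switch.
func (r *MsgRouter) Route(m Msg) bool {
	r.mtx.RLock()
	defer r.mtx.RUnlock()
	for _, e := range r.endpoints {
		if e.CanHandle(m) {
			return e.SendMessage(m)
		}
	}
	return false
}

// rbfCloser is a toy endpoint for one channel's RBF close machine.
type rbfCloser struct {
	chanID  string
	handled []Msg
}

func (c *rbfCloser) CanHandle(m Msg) bool { return m.ChanID == c.chanID }
func (c *rbfCloser) SendMessage(m Msg) bool {
	c.handled = append(c.handled, m)
	return true
}

// loadActiveChannels mimics registering a closer per eligible channel as
// soon as the connection comes up, before any message is read.
func loadActiveChannels(r *MsgRouter, chanIDs []string) {
	for _, id := range chanIDs {
		r.RegisterEndpoint(&rbfCloser{chanID: id})
	}
}

func main() {
	r := &MsgRouter{}
	loadActiveChannels(r, []string{"chan-1"})
	fmt.Println(r.Route(Msg{ChanID: "chan-1", Kind: "closing_complete"})) // true
	fmt.Println(r.Route(Msg{ChanID: "chan-9", Kind: "closing_complete"})) // false
}
```

If the endpoint were registered lazily instead, any close message arriving before registration would fall through to the legacy switch, which is exactly the failure mode described above.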

This PR can be divided into roughly 3 parts:

  • Creating new structs to satisfy the interface that the env of the new FSM needs.
  • Updating the peer struct to register the state machine with the message router and also handle restart cases.
  • Adding itests that exercise the RBF loop.

One point of discussion: as is, the main database will only store the last coop close transaction we signed. Once confirmed, the wallet will know of the canonical version, but do we also want to store the complete series in the database? I think no, but it seemed worth explicitly calling out.

RPC-wise, as long as the initial gRPC client that requested the coop close is still active, we'll now send a new update event for each new RBF transaction signed. As is, we also send an update for both the local and remote coop close transactions (each side can now have an entirely distinct close txn). In contrast, the existing coop close flow only ever sends a single update once the coop close is published, then another one after final confirmation.

TODO

  • Add opt-out CLI args

  • Add itests


@Roasbeef Roasbeef force-pushed the new-co-op-close-state-machine-final branch from 8692815 to f189dda Compare February 29, 2024 17:18
@Roasbeef Roasbeef force-pushed the new-co-op-close-state-machine-final branch from f189dda to ae5fd0d Compare February 29, 2024 22:54
@Roasbeef Roasbeef changed the base branch from master to peer-msg-router February 29, 2024 22:57
@Roasbeef Roasbeef changed the title lnwallet/chancloser: add new protofsm based RBF chan closer multi: integrate new rbf coop close FSM into the existing peer flow Mar 1, 2024
@Roasbeef (Member, Author) commented Mar 1, 2024

Repurposing this PR to hold the commits that integrate the new state machine into the daemon. A new commit set is coming shortly. Once the itests are finalized, I'll take this out of draft.

@Roasbeef Roasbeef force-pushed the new-co-op-close-state-machine-final branch from ae5fd0d to 9d76f2f Compare March 1, 2024 01:47
@Roasbeef Roasbeef changed the base branch from peer-msg-router to rbf-coop-fsm March 1, 2024 01:47
@Roasbeef Roasbeef force-pushed the rbf-coop-fsm branch 2 times, most recently from cbf3350 to fd59d13 Compare March 5, 2024 05:57
@Roasbeef Roasbeef force-pushed the new-co-op-close-state-machine-final branch from 9d76f2f to b941ef2 Compare March 5, 2024 06:20
@Roasbeef Roasbeef marked this pull request as ready for review March 5, 2024 06:21
@Roasbeef (Member, Author) commented Mar 8, 2024

Pushed a series of new commits that includes an e2e itest for the new RBF flow. Both sides can increase the fee rate for their version until one of them finally confirms.

@Roasbeef Roasbeef changed the title multi: integrate new rbf coop close FSM into the existing peer flow [4/4] - multi: integrate new rbf coop close FSM into the existing peer flow Mar 8, 2024
@saubyk saubyk added this to the v0.18.0 milestone Mar 10, 2024
@saubyk saubyk modified the milestones: v0.18.0, v0.18.1 Mar 21, 2024
@ProofOfKeags (Collaborator) left a comment

OK, so this does work. That said, I'm not fond of the dynamic nature of the message router, nor of the amount of switch code that handles multiplexing the different coop close protocols.

Right now it seems like we have made the peer responsible for managing the channel closure and I'm really not sure that's the right call. We have now introduced a new thread of control with respect to channel id message serialization and I think that can be problematic.

Protofsms always launch new threads afaict, and so now we have the main peer thread, the link thread, and the pfsm ccv2 thread all competing with one another for message ordering.

It occurs to me that the main weakness of protofsm is this requirement of always having a new thread to launch it. I find myself wanting a means of defining state machines that is composable such that the composition still shares the same control thread, lest we create more and more opportunities for concurrency issues.

Overall I can't find any issues with the actual implementation of the CCV2 protocol here. The tests look good. Some of the edges need to be sanded down. I also can't fully endorse the protofsm approach more broadly based off of what I see here.
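The composability wish above (state machines that compose while sharing one control thread) can be sketched as machines driven synchronously by a single loop, so no per-machine goroutine exists to race on message ordering. This is a hypothetical design sketch, not protofsm's actual API.

```go
package main

import "fmt"

// Event is a stand-in for a wire message or internal event.
type Event interface{}

// Machine consumes events synchronously on the caller's goroutine.
type Machine interface {
	// Step returns true if the machine consumed the event.
	Step(Event) bool
}

// Compose chains machines so they share a single driver loop: the first
// machine that claims an event wins, and no extra goroutine is spawned,
// so event ordering is total across all composed machines.
type Compose []Machine

func (c Compose) Step(ev Event) bool {
	for _, m := range c {
		if m.Step(ev) {
			return true
		}
	}
	return false
}

// counter is a toy machine that counts events of one kind.
type counter struct {
	kind  string
	count int
}

func (c *counter) Step(ev Event) bool {
	s, ok := ev.(string)
	if !ok || s != c.kind {
		return false
	}
	c.count++
	return true
}

func main() {
	shutdown := &counter{kind: "shutdown"}
	sigs := &counter{kind: "closing_sig"}
	fsm := Compose{shutdown, sigs}

	// One loop, one goroutine: no cross-thread races on ordering.
	for _, ev := range []Event{"shutdown", "closing_sig", "closing_sig"} {
		fsm.Step(ev)
	}
	fmt.Println(shutdown.count, sigs.count) // 1 2
}
```

The trade-off is that a slow machine blocks the shared loop, which is presumably why protofsm opted for a goroutine per machine.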

peer/brontide.go Outdated
```go
// If a message router is active, then we'll try to have it
// handle this message. If it can, then we're able to skip the
// rest of the message handling logic.
ok := fn.MapOptionZ(p.msgRouter, func(r MsgRouter) error {
```
nit: name this err

peer/brontide.go Outdated
Comment on lines 1685 to 2055
```go
// If a message router is active, then we'll try to have it
// handle this message. If it can, then we're able to skip the
// rest of the message handling logic.
```
Clever way to do incremental introduction of the message router.

```go
	}

	link.OnCommitOnce(htlcswitch.Outgoing, func() {
		if !link.DisableAdds(htlcswitch.Outgoing) {
```
I think this needs to be rebased as @ellemouton changed the call signature of this function.

```diff
@@ -175,7 +175,7 @@ type FlushHookID uint64

 // LinkDirection is used to query and change any link state on a per-direction
 // basis.
-type LinkDirection bool
+type LinkDirection = bool
```
I strongly object to this. We should absolutely not allow construction of these values with booleans. I chose boolean as an implementation detail. It should be opaque to the outside. If we want to finesse the package hierarchy I'd support moving the LinkDirection into a core types package.
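For context on the objection, a small sketch of the Go distinction at play: with a type alias (`type LinkDirection = bool`) any `bool` variable constructs the value freely, while a defined type forces an explicit conversion, keeping the boolean representation an implementation detail. `Opaque` and `Alias` below are illustrative names, not the real `LinkDirection`.

```go
package main

import "fmt"

// Opaque is a defined type over bool: a plain bool *variable* cannot be
// assigned to it without an explicit conversion.
type Opaque bool

// Alias is a type alias: it *is* bool, so any bool constructs it freely,
// which is what the diff above enables for LinkDirection.
type Alias = bool

const (
	Outgoing Opaque = false
	Incoming Opaque = true
)

func describe(d Opaque) string {
	if d == Incoming {
		return "incoming"
	}
	return "outgoing"
}

func main() {
	raw := true

	// With the alias, raw bools flow in with no friction:
	var a Alias = raw
	fmt.Println(a) // true

	// With the defined type, the conversion must be explicit; without
	// the Opaque(raw) conversion this call would not compile:
	fmt.Println(describe(Opaque(raw))) // incoming
}
```

That explicit-conversion friction is exactly the opacity the defined type was chosen to provide.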

@@ -0,0 +1,82 @@
package peer
I don't think this should be in the peer. It's all chan state stuff.

```go
// We'll also compute the final fee rate that the remote party
// paid based off the absolute fee and the size of the closing
// transaction.
vSize := mempool.GetTxVirtualSize(btcutil.NewTx(closeTx))
feeRate := chainfee.SatPerVByte(
	int64(msg.SigMsg.FeeSatoshis) / int64(vSize),
)
```
We should eventually fold this commit into the original I think.
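The fee-rate recovery in the snippet above is simply the absolute fee divided by the transaction's virtual size. A minimal sketch (with a hypothetical helper name, not lnd's API) also makes the integer-division truncation visible:

```go
package main

import "fmt"

// FeeRateSatPerVByte mirrors the computation in the snippet above: the
// remote party's absolute fee divided by the close transaction's virtual
// size. As in the snippet, integer division truncates, so the recovered
// rate can slightly understate the true one.
func FeeRateSatPerVByte(feeSat, vSize int64) int64 {
	if vSize <= 0 {
		// Guard against a nonsensical size rather than dividing by zero.
		return 0
	}
	return feeSat / vSize
}

func main() {
	// A 5,000 sat fee on a 250 vB close tx is 20 sat/vB.
	fmt.Println(FeeRateSatPerVByte(5000, 250)) // 20
	// Truncation: 999 sat over 250 vB reports 3 sat/vB, not 3.996.
	fmt.Println(FeeRateSatPerVByte(999, 250)) // 3
}
```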

Comment on lines +587 to +588

```go
// Ignore any potential duplicate channel flushed events.
case *ChannelFlushed:
	return &CloseStateTransition{
		NextState: c,
	}, nil
```
Why are we getting duplicate flushed events?? This makes me concerned about other parts of the design.

```diff
@@ -470,7 +470,7 @@ type Brontide struct {
 	// cooperative channel closures. Any channel closing messages are directed
 	// to one of these active state machines. Once the channel has been closed,
 	// the state machine will be deleted from the map.
-	activeChanCloses map[lnwire.ChannelID]chanCloserFsm
+	activeChanCloses *lnutils.SyncMap[lnwire.ChannelID, chanCloserFsm]
```
🫡
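For readers unfamiliar with the change in the diff above, a minimal generic equivalent of a mutex-guarded map looks roughly like this. This is an illustrative sketch, not lnd's `lnutils.SyncMap` implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// SyncMap is a minimal concurrency-safe map: every access goes through a
// RWMutex, so concurrent readers and writers (e.g. the read handler and
// the close FSM goroutines) can share it safely.
type SyncMap[K comparable, V any] struct {
	mtx sync.RWMutex
	m   map[K]V
}

func NewSyncMap[K comparable, V any]() *SyncMap[K, V] {
	return &SyncMap[K, V]{m: make(map[K]V)}
}

func (s *SyncMap[K, V]) Store(k K, v V) {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	s.m[k] = v
}

func (s *SyncMap[K, V]) Load(k K) (V, bool) {
	s.mtx.RLock()
	defer s.mtx.RUnlock()
	v, ok := s.m[k]
	return v, ok
}

func (s *SyncMap[K, V]) Delete(k K) {
	s.mtx.Lock()
	defer s.mtx.Unlock()
	delete(s.m, k)
}

func main() {
	closers := NewSyncMap[string, string]()
	closers.Store("chan-1", "rbf-closer")
	v, ok := closers.Load("chan-1")
	fmt.Println(v, ok) // rbf-closer true
	closers.Delete("chan-1")
	_, ok = closers.Load("chan-1")
	fmt.Println(ok) // false
}
```

A plain Go map, by contrast, panics under concurrent write, which is why the field in the diff above moved off the raw `map` type.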

```diff
@@ -122,8 +122,22 @@ type closeMsg struct {

 // PendingUpdate describes the pending state of a closing channel.
 type PendingUpdate struct {
```
Why is this in the brontide instead of the chancloser code?

"github.com/stretchr/testify/require"
)

func testCoopCloseRbf(ht *lntest.HarnessTest) {
🫡

In this commit, we use the interfaces we created in the prior commit to
make a new method capable of spinning up the new rbf coop closer.

In this commit, we add a new composite chanCloserFsm type. This'll allow
us to store a single value that might be a negotiator or an rbf-er.

In a follow up commit, we'll use this to conditionally create the new
rbf closer.
In this commit, we fully integrate the new RBF close state machine into
the peer.

For the restart case after shutdown, we can short circuit the existing
logic as the new FSM will handle retransmitting the shutdown message
itself, and doesn't need to delegate that duty to the link.

Unlike the existing state machine, we're able to restart the flow to
sign a coop close with a new, higher fee rate. In this case, we can now
send multiple updates to the RPC caller, one for each newly signed coop
close transaction.

To implement the async flush case, we'll launch a new goroutine to wait
until the state machine reaches the `ChannelFlushing` state, then we'll
register the hook. We don't do this at start up, as otherwise the
channel may _already_ be flushed, triggering an invalid state
transition.
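The deferred hook registration described above can be sketched with a goroutine that blocks until the flushing state is entered, so a channel that is already flushed at startup never triggers the hook prematurely. All names below are illustrative stand-ins, not lnd's types.

```go
package main

import (
	"fmt"
	"sync"
)

// flushWatcher sketches the pattern: rather than registering the flush
// hook at startup (when the channel may already be flushed), a goroutine
// waits for the state machine to reach its flushing state first.
type flushWatcher struct {
	mtx      sync.Mutex
	flushing chan struct{}
	hooked   bool
}

func newFlushWatcher() *flushWatcher {
	return &flushWatcher{flushing: make(chan struct{})}
}

// enterFlushing is called when the state machine reaches the
// ChannelFlushing state.
func (w *flushWatcher) enterFlushing() {
	close(w.flushing)
}

// registerHookWhenFlushing spawns a goroutine that blocks until the
// flushing state is reached, then installs the hook and signals done.
func (w *flushWatcher) registerHookWhenFlushing(done chan<- struct{}) {
	go func() {
		<-w.flushing
		w.mtx.Lock()
		w.hooked = true
		w.mtx.Unlock()
		close(done)
	}()
}

func main() {
	w := newFlushWatcher()
	done := make(chan struct{})
	w.registerHookWhenFlushing(done)

	// Later, the state machine reaches ChannelFlushing...
	w.enterFlushing()
	<-done
	fmt.Println(w.hooked) // true
}
```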
For now, we disallow the option to be used with the taproot chans
option, as the new flow hasn't yet been updated for nonce usage.
We don't return an error on broadcast failure, as the broadcast might have failed due to insufficient fees or an inability to be replaced, which may happen when one side attempts to unnecessarily bump their coop close fee.
This'll be useful to communicate what the new fee rate is to an RPC caller.
If we go to close while the channel is already flushed, we might get an extra event, so we can safely ignore it and do a self state transition.
With the new RBF based close, we'll actually close the same channel
multiple times, so this check isn't required any longer.
This fixes some existing race conditions, as the `finalizeChanClosure`
function was being called from outside the main event loop.
If we hit an error, we want to wipe the state machine state, which also
includes removing the old endpoint.
This'll allow us to notify the caller each time a new coop close
transaction with a higher fee rate is signed.
Resp is always nil, so we actually need to log event.Update here.
In this commit, we extend `CloseChannelAssertPending` with new args that
returns the raw close status update (as we have more things we'd like to
assert), and also allows us to pass in a custom fee rate.
The itest has both sides try to close multiple times, each time with
increasing fee rates.
@Roasbeef Roasbeef force-pushed the new-co-op-close-state-machine-final branch from e27fd68 to 30be26b Compare September 24, 2024 07:22
@Roasbeef (Member, Author):

@lightninglabs-deploy relax.

@lightninglabs-deploy:
@Crypt-iQ: review reminder
@Roasbeef, remember to re-request review from reviewers when ready

Labels
P1 MUST be fixed or reviewed
Projects
Status: In Progress
Development

Successfully merging this pull request may close these issues.

[bug]: unable to retry coop close attempt with higher fee rate (between two nodes running v0.15.3)
4 participants