
Some more parallel copy fixes #5924

Merged 12 commits into master on Apr 3, 2025
Conversation

@lutter (Collaborator) commented Apr 1, 2025

No description provided.

@lutter requested a review from zorancv April 1, 2025 21:12
@lutter force-pushed the lutter/copy-parallel2 branch from e21f67d to acfba0e on April 2, 2025 19:18
@zorancv (Contributor) left a comment:

Makes sense to me.

lutter added 12 commits April 2, 2025 14:32
We need to be absolutely sure that when `copy_data_internal` is done, we have a connection in `self.conn`; we therefore want to make it clear when we might exit early with an error.

Otherwise, they don't really run in parallel.

10,000 seems too big and actually slows things down.

This ensures that `copy_data` can't be called more than once on any instance; when copying encounters an error, it might leave the `CopyConnection` in an inconsistent state, and it should therefore not be reused.

Also make `copy_data_internal` private; it should never be called from the outside.
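The "can't be called more than once" guarantee in the last commit is something Rust's ownership system can enforce at compile time: if the public entry point takes `self` by value, a second call on the same instance simply won't compile. A minimal sketch of that pattern, with an illustrative `CopyConnection` type rather than the actual graph-node code:

```rust
// Illustrative sketch only: the real CopyConnection wraps a database
// connection; a String stands in for it here.
struct CopyConnection {
    conn: String,
}

impl CopyConnection {
    // Public entry point: consuming `self` means the compiler rejects
    // any second call on the same instance.
    pub fn copy_data(mut self) -> Result<(), String> {
        self.copy_data_internal()
    }

    // Private helper; code outside this impl can never call it directly.
    fn copy_data_internal(&mut self) -> Result<(), String> {
        // If this returns Err early, `self` is dropped by the caller,
        // so a half-copied, inconsistent instance can't be reused.
        Ok(())
    }
}

fn main() {
    let copier = CopyConnection { conn: "db".to_string() };
    copier.copy_data().unwrap();
    // copier.copy_data(); // would not compile: value moved above
}
```

Making `copy_data_internal` private and routing all callers through the consuming `copy_data` gives both properties from the commit messages at once: single use, and no external access to the helper.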
@lutter force-pushed the lutter/copy-parallel2 branch from acfba0e to 843278a on April 2, 2025 21:32
@lutter merged commit 843278a into master on Apr 3, 2025
6 checks passed
@lutter deleted the lutter/copy-parallel2 branch April 3, 2025 19:17