
Rollback ddp_tutorial fix #1621

Resolve Shen's comments on "Rollback ddp_tutorial fix"
Summary:

#1618 was merged unintentionally before all the review comments were resolved.
Also made a minor fix to skip demo_model_parallel when n_gpus < 4.

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

[ghstack-poisoned]
bowangbj committed Jul 30, 2021
commit 4ddc9d1c65e955b0b1ce659394ee454b3285c250
12 changes: 7 additions & 5 deletions intermediate_source/ddp_tutorial.rst
@@ -285,8 +285,10 @@ either the application or the model ``forward()`` method.
 
 if __name__ == "__main__":
     n_gpus = torch.cuda.device_count()
-    assert n_gpus >= 8, f"Requires at least 2 GPUs to run, but got {n_gpus}"
-    world_size = n_gpus
-    run_demo(demo_basic, world_size)
-    run_demo(demo_checkpoint, world_size)
-    run_demo(demo_model_parallel, world_size)
+    assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
+    run_demo(demo_basic, n_gpus)
+    run_demo(demo_checkpoint, n_gpus)
+    if n_gpus < 4:
+        print("Skipped demo_model_parallel since it requires >= 4 GPUs.")
+    else:
+        run_demo(demo_model_parallel, n_gpus)
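
For context, a minimal runnable sketch of what the patched ``__main__`` block does, assuming the tutorial's run_demo helper (which launches one process per rank via torch.multiprocessing.spawn); demo_fn below is a hypothetical stand-in for the tutorial's demo_basic, demo_checkpoint, and demo_model_parallel:

    import torch
    import torch.multiprocessing as mp

    def demo_fn(rank, world_size):
        # Hypothetical stand-in for demo_basic / demo_checkpoint /
        # demo_model_parallel; mp.spawn passes the rank as the first argument.
        print(f"Running on rank {rank} of {world_size}")

    def run_demo(demo_fn, world_size):
        # Launch world_size processes, one per rank, and wait for them to finish.
        mp.spawn(demo_fn, args=(world_size,), nprocs=world_size, join=True)

    if __name__ == "__main__":
        n_gpus = torch.cuda.device_count()
        assert n_gpus >= 2, f"Requires at least 2 GPUs to run, but got {n_gpus}"
        run_demo(demo_fn, n_gpus)      # demo_basic in the tutorial
        run_demo(demo_fn, n_gpus)      # demo_checkpoint in the tutorial
        if n_gpus < 4:
            # Presumably each model-parallel replica spans two GPUs,
            # hence the >= 4 GPU floor for demo_model_parallel.
            print("Skipped demo_model_parallel since it requires >= 4 GPUs.")
        else:
            run_demo(demo_fn, n_gpus)  # demo_model_parallel in the tutorial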