
Use torch distributed #123

Open
mucunwuxian wants to merge 2 commits into HRNet:pytorch-v1.1 from mucunwuxian:use_torch_distributed
Conversation

@mucunwuxian

First, thanks a lot for your work!
I love the structure of HRNet. 👍✨

I was wondering why the GPU parallelization method used for training differs from the one used for testing. I also noticed an issue discussing how to handle the case where only one GPU is available.
In addition, I can't use "nn.DataParallel" with my RTX 2080, so I would be glad if everything were unified under "nn.parallel.DistributedDataParallel".

So I've submitted this fix.
What do you think?

Sincerely yours,
Mucun
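A minimal sketch of the idea behind this change, assuming a standard PyTorch DDP setup (the single-process gloo process group and the `Linear` placeholder model are illustrative, not the PR's actual diff): with `world_size == 1`, `DistributedDataParallel` wraps the model the same way as in the multi-GPU case, so training and testing can share one code path even on a single GPU or CPU.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Minimal single-process process group; the gloo backend runs on CPU,
# so this works even where NCCL or nn.DataParallel is unavailable.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(4, 2)  # placeholder model, not the actual HRNet

# With world_size == 1 this wraps the model exactly as in the
# multi-GPU case, so train and test use the same parallel wrapper.
ddp_model = DDP(model)

out = ddp_model(torch.randn(8, 4))
print(tuple(out.shape))

dist.destroy_process_group()
```

To actually use multiple GPUs, the same script would be launched with one process per device (e.g. via `torchrun`), with `rank` and `world_size` taken from the environment instead of hard-coded.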

@mucunwuxian mucunwuxian changed the base branch from master to pytorch-v1.1 April 12, 2020 07:10