Enable Support for Multi-GPU Training #517
Conversation
Summary of changes:

- fix `FutureWarning` and `PerformanceWarning`
- change `precision` to a string to match PL; remove integer choices and add a pointer to the docs with the possible options
- add `pickle_protocol` in `DataConfig` to save very large DataModules, and pass it down to `torch.save()` when caching datasets to disk (see the sketch below)
- store the cached data on `TabularDataset` so that it can be restored when accessing `TabularDataset.data`
- adapt `validation_step()` and `test_step()` to distributed training
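To illustrate the `pickle_protocol` plumbing: `torch.save()` accepts a `pickle_protocol` keyword, so a configured value can be threaded straight through to the on-disk cache. This is a minimal sketch, not the PR's actual code; the `DataConfig` mirror and the `cache_dataset`/`restore_dataset` helpers are hypothetical names.

```python
import pickle
from dataclasses import dataclass

import torch


@dataclass
class DataConfig:
    # Hypothetical mirror of the new option. Protocol 4+ supports
    # objects larger than 4 GB, which matters for very large DataModules.
    pickle_protocol: int = pickle.DEFAULT_PROTOCOL


def cache_dataset(dataset, path: str, config: DataConfig) -> None:
    """Hypothetical helper: persist a dataset with the configured protocol."""
    # torch.save() exposes pickle_protocol directly, so the config value
    # can be passed through unchanged.
    torch.save(dataset, path, pickle_protocol=config.pickle_protocol)


def restore_dataset(path: str):
    """Hypothetical helper: load the cached dataset back from disk."""
    # weights_only=False because the cache holds an arbitrary Python
    # object, not just tensors.
    return torch.load(path, weights_only=False)
```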
@manujosephv Wish I knew a release was coming or I would have pushed this up sooner; sorry about that.

@manujosephv I see some Python 3.8 failures because the option context I'm using isn't available. Let me know how you'd like me to address those.
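The comment doesn't name the option context involved. If it refers to pandas' `pd.option_context` (a common tool for the kind of `FutureWarning`/`PerformanceWarning` fixes in this PR), one version-tolerant pattern is to probe the option key first. The key below is an assumed example, not taken from the PR:

```python
from contextlib import nullcontext

import pandas as pd

OPTION_KEY = "mode.copy_on_write"  # assumed example key, not confirmed by the PR

try:
    # pd.get_option raises pandas.errors.OptionError on builds that don't
    # know the key (e.g. the older pandas pinned for Python 3.8).
    pd.get_option(OPTION_KEY)
    ctx = pd.option_context(OPTION_KEY, True)
except pd.errors.OptionError:
    ctx = nullcontext()  # fall back to a no-op context manager

with ctx:
    df = pd.DataFrame({"a": [1, 2, 3]})  # code that wants the option enabled
```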
Force-pushed from 61a68d4 to a3f8242.
@manujosephv I removed the commits that tried to address the Python 3.8 failures. All checks are passing now.

@sorenmacbeth Sorry for the delay; got caught up with some other things last week! Thanks for this PR. Merging it :)
I've tested this using the DDP strategy, but I believe it will work with other PL strategies as well.
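For context on what a DDP run involves: pytorch_tabular sits on top of PyTorch Lightning, and in plain Lightning a multi-GPU DDP run looks like the sketch below. The model and data are placeholders, and `sync_dist=True` is one common way to make metrics logged in `validation_step()` distributed-safe by reducing them across ranks; the PR's exact change may differ.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyRegressor(pl.LightningModule):
    """Placeholder model; stands in for a pytorch_tabular model."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def validation_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.net(x), y)
        # sync_dist=True averages the metric across all DDP ranks,
        # so every process logs the same global value.
        self.log("valid_loss", loss, sync_dist=True)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


if __name__ == "__main__":
    data = TensorDataset(torch.randn(256, 8), torch.randn(256, 1))
    loader = DataLoader(data, batch_size=32)
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=2,             # two GPUs
        strategy="ddp",        # one process per GPU
        precision="32-true",   # string form, matching the precision change above
        max_epochs=1,
    )
    trainer.fit(TinyRegressor(), loader, val_dataloaders=loader)
```

Running the script normally is enough; with `strategy="ddp"`, Lightning launches one process per device and sets up the process group itself.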
📚 Documentation preview 📚: https://pytorch-tabular--517.org.readthedocs.build/en/517/