confusions about torchmetrics in pytorch_lightning #19358
Unanswered
KasuganoLove
asked this question in
Lightning Trainer API: Trainer, LightningModule, LightningDataModule
Replies: 0 comments
According to:
The recommendation there is to instantiate three separate torchmetrics objects (one each for train, validation, and test) when logging the metric object, letting Lightning take care of when to reset the metric, etc. Here is the official code (without the test metric):
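To see why separate per-stage metric objects are recommended, here is a minimal pure-Python sketch using a toy stand-in for a stateful metric (the class and method names mimic torchmetrics' `update`/`compute`/`reset` protocol but are illustrative, not the real torchmetrics API):

```python
class MeanMetric:
    """Toy stand-in for a stateful metric: accumulates state
    across update() calls until reset() is called."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.total += value
        self.count += 1

    def compute(self):
        return self.total / self.count


# Sharing one object between stages entangles their state:
shared = MeanMetric()
shared.update(1.0)       # pretend this is a training batch
shared.update(0.0)       # pretend this is a validation batch
print(shared.compute())  # 0.5 -- train and val results are mixed together

# Separate objects keep the accumulations independent:
train_metric, val_metric = MeanMetric(), MeanMetric()
train_metric.update(1.0)
val_metric.update(0.0)
print(train_metric.compute(), val_metric.compute())  # 1.0 0.0
```

With separate objects, Lightning can reset each one at the end of its own epoch without one stage's batches leaking into another stage's result.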
My questions are:

1. If we call `torchmetrics.reset()` in `on_train_epoch_end`, `on_validation_epoch_end` and `on_test_epoch_end`, we only need one torchmetric object to handle all three stages, is that right?
2. If we only use `torchmetrics.forward()` to calculate the metrics of the inputs, the internal state doesn't matter (even `torchmetrics.reset()` is redundant), is that right?
3. For torchmetrics whose internal state matters (like FID scores), following question 1, we still only need one torchmetric object for all stages, is that right?
4. Does `torchmetrics.compute()` in my second snippet below still work properly under DDP mode?

Here is my code that just uses `torchmetrics.forward()` to calculate the metrics of the inputs:

Here is my code for torchmetrics with useful internal state (like FID scores):
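On the `forward()` question: in torchmetrics, `Metric.forward()` both updates the accumulated state and returns the metric evaluated on the current batch alone. A toy stand-in (illustrative names, not the real torchmetrics API) makes the distinction concrete — the returned value is batch-local, but the state still grows, so `reset()` is only redundant if you never call `compute()`:

```python
class SumMetric:
    """Toy stand-in mimicking the forward/update/compute protocol
    of a torchmetrics-style metric (not the real API)."""

    def __init__(self):
        self.state = 0.0

    def update(self, x):
        self.state += x

    def compute(self):
        return self.state

    def reset(self):
        self.state = 0.0

    def forward(self, x):
        # Like torchmetrics' forward: update the accumulated state,
        # but return the metric evaluated on this batch alone.
        self.update(x)
        return x  # batch-local value, independent of self.state


m = SumMetric()
print(m.forward(2.0))  # 2.0 -- batch value, accumulated state is ignored
print(m.forward(3.0))  # 3.0 -- still batch-local
print(m.compute())     # 5.0 -- accumulated state, needs reset() between epochs
```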
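On the DDP question: torchmetrics is designed so that `compute()` synchronizes the accumulated state across processes before reducing, so each rank sees the global result rather than its local one. A conceptual simulation of that reduction (the `ddp_compute` helper here is hypothetical and stands in for the all-reduce torchmetrics performs internally):

```python
class CountMetric:
    """Toy per-process accuracy state: correct/total counters."""

    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, correct, total):
        self.correct += correct
        self.total += total


def ddp_compute(metrics):
    # Hypothetical stand-in for the cross-process state reduction
    # that happens before the final value is computed under DDP.
    correct = sum(m.correct for m in metrics)
    total = sum(m.total for m in metrics)
    return correct / total


rank0, rank1 = CountMetric(), CountMetric()
rank0.update(3, 4)  # this "process" saw 4 samples, 3 correct
rank1.update(1, 4)  # another "process" saw 4 samples, 1 correct
print(ddp_compute([rank0, rank1]))  # 0.5 -- global accuracy, not per-rank
```

Note that states must be reduced before dividing: averaging the per-rank ratios (0.75 and 0.25) only happens to agree here because both ranks saw the same number of samples.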