Lightning v2.5.2
Notable changes in this release
PyTorch Lightning
Changed
- Added `enable_autolog_hparams` argument to `Trainer` (#20593)
- Added `toggled_optimizer(optimizer)` method to the LightningModule, a context-manager version of `toggle_optimizer` and `untoggle_optimizer` (#20771); see the sketch after this list
- For cross-device local checkpoints, instruct users to install `fsspec>=2025.5.0` if it is unavailable (#20780)
- Check that the param is of `nn.Parameter` type during pruning sanitization (#20783)
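A minimal sketch of how the two new APIs above can be used together; the module body, loss, and optimizer wiring below are placeholders, not code from the release:

```python
import torch
from torch import nn
import lightning.pytorch as pl


class ManualOptModule(pl.LightningModule):
    """Placeholder module demonstrating manual optimization."""

    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # optimizer toggling is a manual-optimization tool
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        # Context-manager form of toggle_optimizer/untoggle_optimizer (#20771):
        # parameters owned by other optimizers have requires_grad disabled inside
        # the block and restored when the block exits.
        with self.toggled_optimizer(opt):
            loss = self.layer(batch).sum()  # placeholder loss
            self.manual_backward(loss)
            opt.step()
            opt.zero_grad()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


# enable_autolog_hparams (#20593) controls automatic hyperparameter logging.
trainer = pl.Trainer(max_epochs=1, enable_autolog_hparams=False)
```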
Fixed
- Fixed `save_hyperparameters` not working correctly with `LightningCLI` when parsing links are applied on instantiation (#20777)
- Fixed an edge case in the `logger_connector` where the step could be a float (#20692)
- Fixed deadlocks in DDP by synchronizing SIGTERM handling across ranks (#20825)
- Fixed case-sensitive model name (#20661)
- CLI: resolved a jsonargparse deprecation warning (#20802)
- Fixed `to_torchscript` to move `check_inputs` to the target device if one is available (#20873); see the sketch after this list
- Fixed the progress bar display to correctly handle iterable datasets and `max_steps` during training (#20869)
- Fixed a problem with silently supporting `jsonnet` (#20899)
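A minimal sketch of the `to_torchscript` fix above, using a toy placeholder module; with #20873, tensors passed through `check_inputs` are moved to the model's device before `torch.jit.trace` verifies the trace:

```python
import torch
from torch import nn
import lightning.pytorch as pl


class TinyModule(pl.LightningModule):
    """Placeholder module with a single linear layer."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 2)

    def forward(self, x):
        return self.layer(x)


model = TinyModule()
scripted = model.to_torchscript(
    method="trace",
    example_inputs=torch.randn(8, 4),
    # Extra kwargs are forwarded to torch.jit.trace; these CPU tensors are now
    # moved to the model's device before the trace is checked.
    check_inputs=[(torch.randn(2, 4),)],
)
```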
Lightning Fabric
Changed
- Ensure correct device is used for autocast when mps is selected as Fabric accelerator (#20876)
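A minimal sketch of the MPS autocast path, runnable only on Apple-silicon hardware; the model and data are placeholders:

```python
import torch
from torch import nn
from lightning.fabric import Fabric

# With #20876, the autocast context uses the MPS device type instead of
# assuming CUDA when the accelerator is "mps".
fabric = Fabric(accelerator="mps", devices=1, precision="16-mixed")
fabric.launch()

model = fabric.setup(nn.Linear(8, 2))  # placeholder model
x = torch.randn(4, 8, device=fabric.device)
with fabric.autocast():  # autocast now targets the "mps" device type
    y = model(x)
```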
Fixed
- Fixed `TransformerEnginePrecision` conversion for layers with `bias=False` (#20805); a sketch follows
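A minimal sketch of the scenario this fix covers, assuming an NVIDIA GPU with the `transformer_engine` package installed; the model below is a placeholder:

```python
from torch import nn
from lightning.fabric import Fabric

# Layers created with bias=False previously failed to convert correctly under
# the Transformer Engine precision plugin (#20805).
model = nn.Sequential(
    nn.Linear(128, 128, bias=False),
    nn.GELU(),
    nn.Linear(128, 128),
)

fabric = Fabric(accelerator="cuda", devices=1, precision="transformer-engine")
fabric.launch()
model = fabric.setup(model)  # Linear layers are swapped for their TE equivalents
```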
Full commit list: 2.5.1 -> 2.5.2
Contributors
We thank all the folks who submitted issues, features, fixes, and doc changes. It's the only way we can collectively make Lightning ⚡ better for everyone. Nice job!
In particular, we would like to thank the authors of the pull-requests above, in no particular order:
@adamjstewart, @Armannas, @bandpooja, @Borda, @chanokin, @duydl, @GdoongMathew, @KAVYANSHTYAGI, @mauvilsa, @muthissar, @rustamzh, @siemdejong
Thank you ❤️ and we hope you'll keep them coming!