
4/08/2022

Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance

 

Try #1

import pytorch_lightning as pl
from pytorch_lightning.strategies.ddp import DDPStrategy

# Pass find_unused_parameters=False through the DDP strategy (PyTorch Lightning >= 1.6)
trainer = pl.Trainer(
    strategy=DDPStrategy(find_unused_parameters=False),
    accelerator='gpu',
    devices=3,
)

..

Try #2

import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin

# Older API (PyTorch Lightning < 1.6): pass the flag via DDPPlugin instead
trainer = pl.Trainer(
    val_check_interval=0.1,
    gpus=-1,
    accelerator="ddp",
    callbacks=[checkpoint_callback, early_stop_callback],
    plugins=DDPPlugin(find_unused_parameters=False),
    precision=16,
)

..
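Both approaches pass the same flag down to torch.nn.parallel.DistributedDataParallel, which Lightning builds internally. A minimal sketch of the underlying PyTorch call (manual wrapping shown only for illustration, assuming the process group is already initialized):

..

import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Assumes torch.distributed.init_process_group() was already called
# (Lightning handles this for you when using the DDP strategy/plugin).
model = nn.Linear(10, 1)

# find_unused_parameters=True forces an extra traversal of the autograd graph
# every iteration; keep it False unless some parameters really go unused.
ddp_model = DDP(model, find_unused_parameters=False)

..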


Thank you.


4/06/2022

pytorch lightning: change the checkpoint output dir and version string

 


Use "save_dir",  "name",  "version" params in pl_loggers function.

More details: https://pytorch-lightning.readthedocs.io/en/stable/extensions/generated/pytorch_lightning.loggers.TensorBoardLogger.html

..

from pytorch_lightning import Trainer
from pytorch_lightning import loggers as pl_loggers

# TensorBoard logs (and, by default, checkpoints) go under logs/no_default/my_version_2/
tb_logger = pl_loggers.TensorBoardLogger(save_dir="logs", name='no_default', version='my_version_2')

trainer = Trainer(max_epochs=num_epochs, callbacks=[checkpoint_callback], logger=tb_logger)

..



And you can retrieve the resulting logging path with:

tb_logger.log_dir, or self.logger.log_dir (inside a LightningModule)

> logs/no_default/my_version_2
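
When a logger is attached and no dirpath is given, Lightning typically writes checkpoints under that same logger directory. If you want to pin the checkpoint folder explicitly, here is a small sketch (the monitor and filename values below are placeholders, not from this post):

..

from pytorch_lightning.callbacks import ModelCheckpoint

# dirpath follows the logger path from above; monitor/filename are placeholder values.
checkpoint_callback = ModelCheckpoint(
    dirpath="logs/no_default/my_version_2/checkpoints",
    filename="{epoch}-{step}",
    monitor="val_loss",
    save_top_k=1,
)

..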


Thank you.

www.marearts.com

πŸ™‡πŸ»‍♂️

4/04/2022

skip sanity val check in pytorch lightning

 

Set the trainer like this:

> trainer = Trainer(num_sanity_val_steps=0)
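
For reference, the default is 2 sanity-check validation batches before training starts, and -1 runs the whole validation set; a small sketch of the options:

..

from pytorch_lightning import Trainer

trainer = Trainer(num_sanity_val_steps=0)    # skip the sanity check entirely
# trainer = Trainer(num_sanity_val_steps=2)  # default: run 2 val batches first
# trainer = Trainer(num_sanity_val_steps=-1) # run all val batches before training

..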

Thank you.