PyTorch distributed training errors

When launching with torch.distributed.launch, one of the worker processes exits with status 1 and the launcher reports:

subprocess.CalledProcessError: Command '['/home/labpos/anaconda3/envs/idr/bin/python', '-u', 'main_distribute.py', '--local_rank=1']' returned non-zero exit status 1.

This is only the launcher's wrapper error: it says that the worker with rank 1 died, not why. The real cause is the traceback printed by that worker itself, shown below.
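
For reference, a command along these lines produces the setup seen in the traceback; the launcher passes --local_rank=N to each worker, which matches the failing command above (the script name comes from the traceback, while --nproc_per_node=2 is an assumption, chosen because the failing rank is 1):

    python -m torch.distributed.launch --nproc_per_node=2 main_distribute.py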

The underlying problem encountered during PyTorch DistributedDataParallel training:

RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /opt/conda/conda-

The fix is to pass find_unused_parameters=True when constructing DistributedDataParallel:

    backbone = parallel.DistributedDataParallel(module=backbone, find_unused_parameters=True, broadcast_buffers=False, device_ids=[local_rank])
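
In context, a minimal sketch of a launch-script entry point using this call might look as follows (the build_backbone factory and the process-group settings are illustrative assumptions, not from the original post):

    import argparse
    import torch
    import torch.distributed as dist
    from torch.nn import parallel

    def main():
        # torch.distributed.launch passes --local_rank=N to each worker,
        # matching the command shown in the wrapper error above.
        parser = argparse.ArgumentParser()
        parser.add_argument("--local_rank", type=int, default=0)
        args = parser.parse_args()

        torch.cuda.set_device(args.local_rank)
        dist.init_process_group(backend="nccl")

        backbone = build_backbone().cuda(args.local_rank)  # hypothetical model factory
        # find_unused_parameters=True makes DDP detect parameters that did not
        # contribute to the loss in an iteration, so gradient reduction does
        # not wait forever on gradients that will never arrive.
        backbone = parallel.DistributedDataParallel(
            module=backbone,
            find_unused_parameters=True,
            broadcast_buffers=False,
            device_ids=[args.local_rank],
        )
        # ... training loop ...

    if __name__ == "__main__":
        main()

Note that find_unused_parameters=True adds a per-iteration traversal of the autograd graph, so it has some overhead; where possible, the cleaner fix suggested by the error message itself is to make sure every output of forward actually participates in the loss.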
