
Use multiple idle GPUs to speed up model training and inference
import torch
from gpuinfo import GPUInfo

# Query idle GPUs
gpu_ids = GPUInfo.check_empty()

# Choose the device that will hold the model's parameters
device = torch.device(f"cuda:{gpu_ids[0]}" if gpu_ids else "cpu")

# Data parallelism; model = ...  (the model itself is built or loaded elsewhere)
if gpu_ids:
    model = torch.nn.DataParallel(model, device_ids=gpu_ids)

# Move the model to the primary device
model = model.to(device)

# Data-parallel computation
for line in dataloader:
    # The batch does not need to be moved to a particular device by hand:
    # DataParallel splits it along dim 0 and scatters the pieces to the listed GPUs
    model(line)
    # ......
The advantage of DistributedDataParallel over DataParallel shows up mainly during training; inference involves no parameter updates and therefore no inter-process gradient synchronization, so the inference speed of the two should not differ much.
The difference between DistributedDataParallel and DataParallel is that DistributedDataParallel uses multiprocessing, creating one process for each GPU, while DataParallel uses multithreading. With a dedicated process per GPU, the performance overhead caused by the GIL of the Python interpreter is avoided.
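To make the "one process per GPU" idea concrete, here is a minimal sketch of a spawn-based launcher. The worker function, the master address/port values, and the choice to use every local GPU (rather than only the idle ones found with gpuinfo) are assumptions for illustration, not part of the original snippet.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # One process per GPU: each worker joins the process group under its own rank
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")   # assumed single-machine setup
    os.environ.setdefault("MASTER_PORT", "29500")       # assumed free port
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    # ... build the model and run training on cuda:{rank} here ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    # spawn launches world_size independent processes, one per GPU
    mp.spawn(worker, args=(world_size,), nprocs=world_size)

Each spawned process runs worker(rank, world_size) on its own, so the Python GIL never serializes work across GPUs.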
For training, the code above can be rewritten as distributed multi-process data parallelism by combining these two classes:
torch.nn.parallel.DistributedDataParallel
torch.utils.data.distributed.DistributedSampler

Distributed Data Parallel
(A fuller version is left for a later write-up.)
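Until then, a minimal sketch of how the two classes typically fit together might look like the following. It assumes the script is launched with torchrun --nproc_per_node=<num_gpus>; the Linear model, the random TensorDataset, the SGD optimizer, and all hyperparameters are placeholders for illustration, not the setup from this post.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler

def main():
    # Launched with: torchrun --nproc_per_node=<num_gpus> train_ddp.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model and dataset; replace with the real ones
    model = torch.nn.Linear(16, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    dataset = TensorDataset(torch.randn(1024, 16), torch.randint(0, 2, (1024,)))
    # DistributedSampler gives each process a disjoint shard of the dataset
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)      # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()           # gradients are all-reduced across processes here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

DistributedSampler hands each process a disjoint shard of the data, and DDP all-reduces gradients during backward(), so the effective batch size is batch_size multiplied by the number of processes.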