# From PyTorch DDP to Accelerate to Trainer, mastery of distributed training with ease
## General Overview
This tutorial assumes you have a basic understanding of how to train a simple model with PyTorch. It walks through training on multiple GPUs with distributed data parallelism (DDP), shown at three increasing levels of abstraction:
- Native PyTorch DDP through the `torch.distributed` module
- 🤗 Accelerate's lightweight wrapper around `torch.distributed`, which also ensures the code can run on a single GPU or on TPUs with zero or minimal code changes
- 🤗 Transformers' high-level `Trainer` API, which abstracts away all the boilerplate and supports a variety of devices and distributed scenarios out of the box
## What is distributed training, and why does it matter?
The basic PyTorch training code below, based on the [official MNIST example](https://github.com/pytorch/examples/blob/main/mnist/main.py), sets up and trains an MNIST model:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms


class BasicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)
        self.act = F.relu

    def forward(self, x):
        x = self.act(self.conv1(x))
        x = self.act(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.act(self.fc1(x))
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output
```
We declare the device we will train on (CUDA):
```python
device = "cuda"
```
Build some basic PyTorch DataLoaders:
```python
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
test_dset = datasets.MNIST('data', train=False, transform=transform)

train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)
```
Move the model to the CUDA device:
```python
model = BasicNet().to(device)
```
Build a PyTorch optimizer:
```python
optimizer = optim.AdamW(model.parameters(), lr=1e-3)
```
Finally, create a simple training and evaluation loop that performs one full pass over the dataset and reports the test accuracy:
```python
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
    data, target = data.to(device), target.to(device)
    output = model(data)
    loss = F.nll_loss(output, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

model.eval()
correct = 0
with torch.no_grad():
    for data, target in test_loader:
        data, target = data.to(device), target.to(device)
        output = model(data)
        pred = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()
print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```
Typically, from here you would put all of this into a Python script or run it in a Jupyter Notebook.
However, how would you then take this script and run it on two GPUs, or across multiple machines, to speed up training, if those resources are available? Simply running `python myscript.py` will only ever use a single GPU. This is where the `torch.distributed` module comes in.
## PyTorch Distributed Data Parallelism
As the name implies, `torch.distributed` is meant for distributed work. This can include multi-node setups where each of several machines has a single GPU, multi-GPU setups where a single system has several GPUs, or some combination of both.
To convert the code to a distributed setup, a few initialization steps must be defined first; the details are covered in the [DDP tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
First, `setup` and `cleanup` functions must be declared. These open up a process group that all of the compute processes can communicate through.
> Note: for this part of the tutorial, assume the code lives in a Python script file. Later on, a launcher from Accelerate will be discussed that removes this necessity.
```python
import os
import torch.distributed as dist


def setup(rank, world_size):
    "Sets up the process group and configuration for PyTorch Distributed Data Parallelism"
    os.environ["MASTER_ADDR"] = 'localhost'
    os.environ["MASTER_PORT"] = "12355"

    # Initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)


def cleanup():
    "Cleans up the distributed environment"
    dist.destroy_process_group()
```
The last piece of the puzzle is: how do I send my data and model to another GPU?
This is where the `DistributedDataParallel` module comes in. It copies your model onto each GPU, and when `loss.backward()` is called, the gradients from all of these copies are averaged/reduced during backpropagation. This ensures each device ends up with the same weights after the optimizer step.
Below is an example of our training setup, refactored as a function with this capability:
> Note: the rank here is the overall rank of the current GPU compared to all of the other GPUs available, meaning they are ranked 0 -> n-1.
```python
from torch.nn.parallel import DistributedDataParallel as DDP


def train(model, rank, world_size):
    setup(rank, world_size)
    model = model.to(rank)
    ddp_model = DDP(model, device_ids=[rank])
    optimizer = optim.AdamW(ddp_model.parameters(), lr=1e-3)
    # Train for one epoch
    ddp_model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # Place each batch on the GPU owned by this process
        data, target = data.to(rank), target.to(rank)
        output = ddp_model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    cleanup()
```
The optimizer needs to be declared based on the model on the specific device (so `ddp_model` and not `model`) so that all of the gradients are properly calculated.
Lastly, to run the script, PyTorch has a convenient `torchrun` command line module that can help. Just pass in the number of processes (GPUs) it should use per node as well as the script to run:
```bash
torchrun --nproc_per_node=2 --nnodes=1 example_script.py
```
The above will run the training script on two GPUs that live on a single machine, and this is the barebones way of performing distributed training with only PyTorch.
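If you prefer to spawn the worker processes from inside the script itself rather than with `torchrun`, a minimal sketch using `torch.multiprocessing.spawn` could look like the following. The `spawn_worker` wrapper and the `__main__` guard are our own assumptions and are not part of the original example; `mp.spawn` passes the process rank as the first argument, which is then forwarded to `train` along with the world size.
```python
# A sketch, not part of the original example: drive `train` with mp.spawn
# instead of torchrun, so each spawned process receives its own rank.
import torch.multiprocessing as mp


def spawn_worker(rank, world_size):
    # mp.spawn calls this as spawn_worker(rank, *args)
    model = BasicNet()
    train(model, rank, world_size)


if __name__ == "__main__":
    world_size = torch.cuda.device_count()  # e.g. 2 GPUs on one machine
    mp.spawn(spawn_worker, args=(world_size,), nprocs=world_size, join=True)
```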
Now let's talk about Accelerate, a library aimed at making this process more seamless while also helping you follow a few best practices.
## 🤗 Accelerate
[Accelerate](https://huggingface.co/docs/accelerate) is a library designed to let you perform what we just did above without needing to modify your code greatly. On top of that, the data pipeline built into Accelerate can also improve the performance of your code.
First, let's wrap all of the above code into a single function, to help us visualize the difference:
```python
def train_ddp(rank, world_size):
    setup(rank, world_size)
    # Build DataLoaders
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
    test_dset = datasets.MNIST('data', train=False, transform=transform)

    train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
    test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)

    # Build model
    model = BasicNet().to(rank)
    ddp_model = DDP(model, device_ids=[rank])

    # Build optimizer
    optimizer = optim.AdamW(ddp_model.parameters(), lr=1e-3)

    # Train for a single epoch
    ddp_model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(rank), target.to(rank)
        output = ddp_model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # Evaluate
    ddp_model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(rank), target.to(rank)
            output = ddp_model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```
Next let's talk about how Accelerate can help. The above code has a few issues:
1. It is slightly inefficient, given that a dataloader holding the full dataset is built and pushed on each device.
2. The code will **only** run on multiple GPUs, so special care would need to be taken for it to run on a single node again, or on TPUs.
Accelerate solves the above through the [Accelerator](https://huggingface.co/docs/accelerate/v0.12.0/en/package_reference/accelerator#accelerator) class. With it, the code remains largely the same except for three lines when comparing single-node and multi-node setups, as shown below:
```python
from accelerate import Accelerator


def train_ddp_accelerate():
    accelerator = Accelerator()
    # Build DataLoaders
    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
    ])

    train_dset = datasets.MNIST('data', train=True, download=True, transform=transform)
    test_dset = datasets.MNIST('data', train=False, transform=transform)

    train_loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
    test_loader = torch.utils.data.DataLoader(test_dset, shuffle=False, batch_size=64)

    # Build model
    model = BasicNet()

    # Build optimizer
    optimizer = optim.AdamW(model.parameters(), lr=1e-3)

    # Send everything through `accelerator.prepare`
    train_loader, test_loader, model, optimizer = accelerator.prepare(
        train_loader, test_loader, model, optimizer
    )

    # Train for a single epoch
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # Batches come off the prepared dataloader already on the right device
        output = model(data)
        loss = F.nll_loss(output, target)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()

    # Evaluate
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            output = model(data)
            pred = output.argmax(dim=1, keepdim=True)
            correct += pred.eq(target.view_as(pred)).sum().item()
    print(f'Accuracy: {100. * correct / len(test_loader.dataset)}')
```
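One caveat worth noting (our addition, not from the original post): since `accelerator.prepare` shards the dataloaders, each process only evaluates its own portion of `test_loader`, so the accuracy printed above is computed per process. A hedged sketch of how the last lines of the evaluation could aggregate the counts across processes, reusing `correct`, `test_dset`, and `accelerator` from the function above:
```python
# Sketch only: aggregate per-process counts before reporting accuracy.
correct = torch.tensor([correct], device=accelerator.device)
correct = accelerator.gather(correct).sum().item()
# accelerator.print only prints on the main process
accelerator.print(f'Accuracy: {100. * correct / len(test_dset)}')
```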
With this `Accelerator` object, your PyTorch training loop is now set up to run on any distributed configuration. This code can still be launched through the `torchrun` CLI or through Accelerate's own CLI interface, [accelerate launch](https://huggingface.co/docs/accelerate/v0.12.0/en/basic_tutorials/launch).
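For instance, assuming the function above lives in a (hypothetical) script named `example_script.py` that calls `train_ddp_accelerate()` at the bottom, launching it with Accelerate's CLI could look like this:
```bash
accelerate config                     # answer a few questions about your setup once
accelerate launch example_script.py   # reuses the saved configuration
```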
As a result, it is now trivially easy to perform distributed training with Accelerate while keeping as much of the native PyTorch code the same as possible.
Earlier it was mentioned that Accelerate also makes the DataLoaders more efficient. It does this through custom samplers that automatically send only the relevant part of each batch to each device during training, so that only a single copy of the data is in play at a time rather than, say, four copies in memory at once, depending on the configuration. Along with this, there is only a single full copy of the original dataset in memory in total. Subsets of this dataset are split between all of the nodes used for training, allowing much larger datasets to be trained on a single instance without memory usage exploding.
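As a small illustration (a sketch, not from the original post), you can see this sharding by printing how many batches each process receives from a prepared dataloader when the snippet below is launched on two processes; `accelerator.process_index` identifies the current process:
```python
# Sketch: reuses `train_dset` and the imports from the earlier snippets.
from accelerate import Accelerator

accelerator = Accelerator()
loader = torch.utils.data.DataLoader(train_dset, shuffle=True, batch_size=64)
loader = accelerator.prepare(loader)
# On 2 processes, each one sees roughly half of the 938 total batches.
print(f"Process {accelerator.process_index}: {len(loader)} batches")
```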
### Using the notebook_launcher
Earlier it was mentioned that you could start distributed code directly out of a Jupyter Notebook. This comes from Accelerate's `notebook_launcher` utility, which allows launching multi-GPU training based on code inside of a Jupyter Notebook.
Using it is as simple as importing the launcher:
```python
from accelerate import notebook_launcher
```
and passing the training function we declared earlier, any arguments to be passed in, and the number of processes to use (such as 8 on a TPU, or 2 for two GPUs). Both of the above training functions can be run, but note that after you start a single launch, the instance needs to be restarted before spawning another one.
```python
notebook_launcher(train_ddp, args=(), num_processes=2)
```
Or:
```python
notebook_launcher(train_ddp_accelerate, args=(), num_processes=2)
```
## Using 🤗 Trainer
Finally, we arrive at the highest-level API: the Hugging Face [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer).
It wraps as much of the training as possible while still being able to train on distributed systems, without the user needing to do anything at all.
First we need to import the Trainer:
```python
from transformers import Trainer
```
Then we define some `TrainingArguments` to control all the usual hyperparameters. The Trainer also works with dictionaries, so a custom collate function needs to be made.
Finally, we subclass the Trainer and write our own `compute_loss`.
Afterwards, this code will also work in a distributed setup, without any training loop code needing to be written!
```python
from transformers import Trainer, TrainingArguments

model = BasicNet()

training_args = TrainingArguments(
    "basic-trainer",
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    evaluation_strategy="epoch",
    remove_unused_columns=False
)


def collate_fn(examples):
    pixel_values = torch.stack([example[0] for example in examples])
    labels = torch.tensor([example[1] for example in examples])
    return {"x": pixel_values, "labels": labels}


class MyTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(inputs["x"])
        target = inputs["labels"]
        loss = F.nll_loss(outputs, target)
        return (loss, outputs) if return_outputs else loss


trainer = MyTrainer(
    model,
    training_args,
    train_dataset=train_dset,
    eval_dataset=test_dset,
    data_collator=collate_fn,
)
```
```python
trainer.train()
```
```bash
***** Running training *****
  Num examples = 60000
  Num Epochs = 1
  Instantaneous batch size per device = 64
  Total train batch size (w. parallel, distributed & accumulation) = 64
  Gradient Accumulation steps = 1
  Total optimization steps = 938
```
| Epoch | Training Loss | Validation Loss |
|--|--|--|
| 1 | 0.875700 | 0.282633 |
Similar to the `notebook_launcher` example above, this can also be done by putting it all into a training function:
```python
def train_trainer_ddp():
    model = BasicNet()

    training_args = TrainingArguments(
        "basic-trainer",
        per_device_train_batch_size=64,
        per_device_eval_batch_size=64,
        num_train_epochs=1,
        evaluation_strategy="epoch",
        remove_unused_columns=False
    )

    def collate_fn(examples):
        pixel_values = torch.stack([example[0] for example in examples])
        labels = torch.tensor([example[1] for example in examples])
        return {"x": pixel_values, "labels": labels}

    class MyTrainer(Trainer):
        def compute_loss(self, model, inputs, return_outputs=False):
            outputs = model(inputs["x"])
            target = inputs["labels"]
            loss = F.nll_loss(outputs, target)
            return (loss, outputs) if return_outputs else loss

    trainer = MyTrainer(
        model,
        training_args,
        train_dataset=train_dset,
        eval_dataset=test_dset,
        data_collator=collate_fn,
    )

    trainer.train()

notebook_launcher(train_trainer_ddp, args=(), num_processes=2)
```
## Related Resources
To learn more about PyTorch Distributed Data Parallelism, check out [the documentation here](https://pytorch.org/docs/stable/distributed.html)
To learn more about 🤗 Accelerate, check out [the documentation here](https://huggingface.co/docs/accelerate)
To learn more about 🤗 Transformers, check out [the documentation here](https://huggingface.co/docs/transformers)
<hr>
> Original English article: [From PyTorch DDP to Accelerate to Trainer, mastery of distributed training with ease](https://huggingface.co/blog/pytorch-ddp-accelerate-transformers#%F0%9F%A4%97-accelerate)
> Translator: innovation64 (李洋)