Quickstart with 🤗 Transformers

Let's build a federated learning system using Hugging Face Transformers and Flower!

We will leverage Hugging Face to federate the training of language models over multiple clients using Flower. More specifically, we will fine-tune a pre-trained Transformer model (DistilBERT) for sequence classification over a dataset of IMDB movie reviews. The end goal is to detect whether a movie review is positive or negative.

Dependencies

To follow along with this tutorial, you will need to install the following packages: datasets, evaluate, flwr, torch, and transformers. This can be done using pip:

$ pip install datasets evaluate flwr torch transformers
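
If you want to make sure everything was installed correctly, one optional check (this snippet is not part of the tutorial code) is to import each package and print its version:

import datasets
import evaluate
import flwr
import torch
import transformers

# Print each dependency's version to confirm the installation
for pkg in (datasets, evaluate, flwr, torch, transformers):
    print(pkg.__name__, pkg.__version__)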

Standard Hugging Face workflow

Handling the data

To fetch the IMDB dataset, we will use Hugging Face's datasets library. We then need to tokenize the data and create PyTorch dataloaders, all of which is done in the load_data function:

import random
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
CHECKPOINT = "distilbert-base-uncased"

def load_data():
    """Load IMDB data (training and eval)"""
    raw_datasets = load_dataset("imdb")
    raw_datasets = raw_datasets.shuffle(seed=42)
    # remove unnecessary data split
    del raw_datasets["unsupervised"]
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    def tokenize_function(examples):
        return tokenizer(examples["text"], truncation=True)
    # We will take a small sample in order to reduce the compute time, this is optional
    train_population = random.sample(range(len(raw_datasets["train"])), 100)
    test_population = random.sample(range(len(raw_datasets["test"])), 100)
    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    tokenized_datasets["train"] = tokenized_datasets["train"].select(train_population)
    tokenized_datasets["test"] = tokenized_datasets["test"].select(test_population)
    tokenized_datasets = tokenized_datasets.remove_columns("text")
    tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
    trainloader = DataLoader(
        tokenized_datasets["train"],
        shuffle=True,
        batch_size=32,
        collate_fn=data_collator,
    )
    testloader = DataLoader(
        tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
    )
    return trainloader, testloader
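
As a quick illustration (this snippet is not part of the tutorial code), we can call load_data and look at the shape of one padded batch; the keys below are the standard output of DataCollatorWithPadding:

trainloader, testloader = load_data()

# One batch is a dict of tensors: input_ids, attention_mask, and labels,
# padded to the length of the longest sequence in the batch
batch = next(iter(trainloader))
print({k: tuple(v.shape) for k, v in batch.items()})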

Training and testing the model

Once we have a way of creating our trainloader and testloader, we can take care of the training and testing. This is very similar to any PyTorch training or testing loop:

from evaluate import load as load_metric
from torch.optim import AdamW  # transformers' own AdamW is deprecated and has been removed

def train(net, trainloader, epochs):
    optimizer = AdamW(net.parameters(), lr=5e-5)
    net.train()
    for _ in range(epochs):
        for batch in trainloader:
            batch = {k: v.to(DEVICE) for k, v in batch.items()}
            outputs = net(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

def test(net, testloader):
    metric = load_metric("accuracy")
    loss = 0
    net.eval()
    for batch in testloader:
        batch = {k: v.to(DEVICE) for k, v in batch.items()}
        with torch.no_grad():
            outputs = net(**batch)
        logits = outputs.logits
        loss += outputs.loss.item()
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    loss /= len(testloader.dataset)
    accuracy = metric.compute()["accuracy"]
    return loss, accuracy

Creating the model itself

To create the model itself, we will just load the pre-trained DistilBERT model using Hugging Face's AutoModelForSequenceClassification:

from transformers import AutoModelForSequenceClassification

net = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=2
).to(DEVICE)
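
Before federating anything, it can be reassuring to check that these pieces fit together locally. The following is just an optional sanity check built from the functions defined above:

trainloader, testloader = load_data()

# Train for a single epoch on the 100-example subset, then evaluate
train(net, trainloader, epochs=1)
loss, accuracy = test(net, testloader)
print(f"Sanity check - loss: {loss:.4f}, accuracy: {accuracy:.4f}")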

Federating the example

Creating the IMDBClient

To federate our example to multiple clients, we first need to write our Flower client class (inheriting from flwr.client.NumPyClient). This is very easy, as our model is a standard PyTorch model:

from collections import OrderedDict
import flwr as fl

class IMDBClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [val.cpu().numpy() for _, val in net.state_dict().items()]

    def set_parameters(self, parameters):
        params_dict = zip(net.state_dict().keys(), parameters)
        state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
        net.load_state_dict(state_dict, strict=True)

    def fit(self, parameters, config):
        self.set_parameters(parameters)
        print("Training Started...")
        train(net, trainloader, epochs=1)
        print("Training Finished.")
        return self.get_parameters(config={}), len(trainloader), {}

    def evaluate(self, parameters, config):
        self.set_parameters(parameters)
        loss, accuracy = test(net, testloader)
        # Report the loss alongside accuracy so the server-side
        # weighted_average function (defined below) can aggregate both
        return float(loss), len(testloader), {"accuracy": float(accuracy), "loss": float(loss)}

The get_parameters function lets the server get the client's parameters. Inversely, the set_parameters function allows the server to send its parameters to the client. Finally, the fit function trains the model locally for the client, and the evaluate function tests the model locally and returns the relevant metrics.
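
To build some intuition for the get_parameters/set_parameters pair, here is an optional sketch (not part of the tutorial) that round-trips the weights through the list-of-NumPy-arrays representation Flower exchanges with the server:

client = IMDBClient()

# Export the weights as NumPy arrays, load them back into the model,
# and verify that nothing changed along the way
params = client.get_parameters(config={})
client.set_parameters(params)
restored = client.get_parameters(config={})
assert all((a == b).all() for a, b in zip(params, restored))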

Starting the server

Now that we have a way to instantiate clients, we need to create our server in order to aggregate the results. Using Flower, this can be done very easily by first choosing a strategy (here we use FedAvg, which will define the global model parameters as the average of all the clients' model parameters at each round) and then using the flwr.server.start_server function:

def weighted_average(metrics):
    accuracies = [num_examples * m["accuracy"] for num_examples, m in metrics]
    losses = [num_examples * m["loss"] for num_examples, m in metrics]
    examples = [num_examples for num_examples, _ in metrics]
    return {
        "accuracy": sum(accuracies) / sum(examples),
        "loss": sum(losses) / sum(examples),
    }

# Define strategy
strategy = fl.server.strategy.FedAvg(
    fraction_fit=1.0,
    fraction_evaluate=1.0,
    evaluate_metrics_aggregation_fn=weighted_average,
)

# Start server
fl.server.start_server(
    server_address="0.0.0.0:8080",
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=strategy,
)

The weighted_average function provides a way to aggregate the metrics distributed amongst the clients (basically, this allows us to display a nice average accuracy and loss for every round).
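
To see the weighting in action, here is a tiny worked example with made-up numbers; the client with more examples pulls the average toward its own metrics:

# Two hypothetical clients: 100 examples at 80% accuracy, 300 at 60%
metrics = [
    (100, {"accuracy": 0.8, "loss": 0.5}),
    (300, {"accuracy": 0.6, "loss": 0.9}),
]
print(weighted_average(metrics))
# {'accuracy': 0.65, 'loss': 0.8}, i.e. (100*0.8 + 300*0.6) / 400 = 0.65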

Putting everything together

We can now start client instances using:

# Create the dataloaders the client relies on before connecting
trainloader, testloader = load_data()

fl.client.start_client(
    server_address="127.0.0.1:8080",
    client=IMDBClient().to_client(),
)

and they will be able to connect to the server and start the federated training.
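
If you would rather experiment on a single machine, one possible alternative (assuming a flwr 1.x release with the simulation extra installed, i.e. pip install flwr[simulation]; newer releases replace this with the flwr run command) is Flower's simulation engine:

# Hedged sketch: spawn virtual clients in one process instead of
# connecting real ones over the network

def client_fn(cid: str):
    # In this toy setup, every virtual client shares the same model and data
    return IMDBClient().to_client()

fl.simulation.start_simulation(
    client_fn=client_fn,
    num_clients=2,
    config=fl.server.ServerConfig(num_rounds=3),
    strategy=strategy,
)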

If you want to see everything put together, you should check out the full code example.

Of course, this is a very basic example, and a lot can be added or modified; it was just meant to show how we can simply federate a Hugging Face workflow using Flower.

Note that in this example we used PyTorch, but we could very well have used TensorFlow.