Strive236
ML2022Spring HW3

Introduction

This HW is an image classification task: classifying the food11 food dataset. The specific task requirements are as follows:

(image: task requirements)

The targets to reach and the hints are:

(image: targets and hints)

Simple

Simply run the provided code as-is; the results are as follows:

(images: simple baseline results)

Medium

Apply training augmentation and extend the training time (a larger n_epoch).

The training augmentation is implemented as follows:

train_tfm = transforms.Compose([
    # Resize the image into a fixed shape (height = width = 128)
    transforms.Resize((128, 128)),
    # You may add some transforms here.
    # RandomChoice picks exactly one of the following transforms per image.
    transforms.RandomChoice([
        transforms.RandomRotation((-30,30)),
        transforms.RandomHorizontalFlip(p=0.5),
        transforms.RandomVerticalFlip(p=0.5),
        transforms.ColorJitter(brightness=(0.5,1.5), contrast=(0.5, 1.5), saturation=(0.5,1.5), hue=(-0.25, 0.25)),
        transforms.RandomInvert(p=0.5),
        transforms.RandomAffine(degrees=(-30,30), translate=(0.1, 0.1), scale=(0.8, 1.2), shear=(-30, 30)),
        transforms.Grayscale(num_output_channels=3),
    ]),
    # ToTensor() should be the last one of the transforms.
    transforms.ToTensor(),
])

A brief description of each transform:

  • RandomRotation((-30, 30)): randomly rotates the image; the angle is drawn from the range -30° to +30°.
  • RandomHorizontalFlip(p=0.5): flips the image horizontally with probability 0.5.
  • RandomVerticalFlip(p=0.5): flips the image vertically with probability 0.5.
  • ColorJitter(brightness=(0.5, 1.5), contrast=(0.5, 1.5), saturation=(0.5, 1.5), hue=(-0.25, 0.25)): randomly adjusts color properties. Brightness, contrast, and saturation are each scaled by a factor drawn from 0.5x to 1.5x; hue is shifted by an offset in [-0.25, +0.25], i.e. -90° to +90° on the hue wheel.
  • RandomInvert(p=0.5): inverts the colors with probability 0.5 (black becomes white, red becomes cyan, and so on).
  • RandomAffine(degrees=(-30, 30), translate=(0.1, 0.1), scale=(0.8, 1.2), shear=(-30, 30)): a random affine transform combining rotation (±30°), translation (at most 10% of the image size horizontally and vertically), scaling (0.8x to 1.2x), and shear (±30°). Shear is a linear geometric transform that tilts the image to simulate a "slanted" distortion.
  • Grayscale(num_output_channels=3): converts the image to grayscale while keeping 3 channels (all three identical) so the model's RGB input shape is unchanged. Grayscale conversion merges the brightness information of the RGB channels into a single luminance value via a weighted average.
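Grayscale conversion is, at heart, a weighted average of the three channels. A minimal NumPy sketch (the 2x2 image is made up; the weights are the standard ITU-R BT.601 luma coefficients, which is what PIL's "L" mode uses):

```python
import numpy as np

# A hypothetical 2x2 RGB image (values in 0..255), purely for illustration
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.float64)

# ITU-R BT.601 luma weights: the weighted average behind grayscale conversion
weights = np.array([0.299, 0.587, 0.114])
gray = rgb @ weights  # single-channel luminance, shape (2, 2)

# Grayscale(num_output_channels=3) then replicates that channel three times
gray3 = np.repeat(gray[..., None], 3, axis=-1)  # shape (2, 2, 3)
```

Pure red maps to 255 * 0.299 ≈ 76.2, while white stays at 255 because the weights sum to 1.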

Training time was extended to 90 epochs; the results are as follows:

(images: medium baseline results)

One thing worth noting: GPU utilization turned out to be very low. The bottleneck is most likely the transform-based augmentation performed on every data read plus disk I/O, which makes each epoch slow, so the DataLoader's worker concurrency needs tuning.

Training used 12 workers (the right number depends on how heavy the augmentation is), with persistent_workers=True to avoid repeatedly creating and destroying worker processes. Testing used 8 workers. After this change, the time per training epoch dropped dramatically. Everything here was run locally on an RTX 4070 Laptop GPU.

The batch size was also increased to 128.
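The loader wiring described above can be sketched as follows. This is a configuration sketch rather than the notebook's exact code; the TensorDataset stand-ins replace the real FoodDataset instances only to make the snippet self-contained:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-ins for the real FoodDataset instances, for illustration only
train_set = TensorDataset(torch.randn(256, 3, 128, 128),
                          torch.zeros(256, dtype=torch.long))
test_set = TensorDataset(torch.randn(64, 3, 128, 128))

train_loader = DataLoader(
    train_set,
    batch_size=128,
    shuffle=True,
    num_workers=12,            # parallel workers hide augmentation + disk I/O cost
    pin_memory=True,           # page-locked memory speeds up host-to-GPU copies
    persistent_workers=True,   # keep workers alive across epochs
)
test_loader = DataLoader(
    test_set,
    batch_size=128,
    shuffle=False,
    num_workers=8,
    pin_memory=True,
    persistent_workers=True,
)
```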

Strong

First, the model architecture was revised: after studying resnet18 and resnet34, I made some small modifications on top of resnet34:

class BasicBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)

        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)

        self.relu = nn.ReLU(inplace=True)

        self.downsample = None
        if stride != 1 or in_channels != out_channels:
            self.downsample = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride, 0),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            identity = self.downsample(identity)

        out += identity
        out = self.relu(out)
        
        return out

class Classifier(nn.Module):
    def __init__(self):
        super(Classifier, self).__init__()
        # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)
        # torch.nn.MaxPool2d(kernel_size, stride, padding)
        # input dimensions: [3, 128, 128]
        # initial convolution layer
        self.conv1 = nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2, bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)

        self.block1 = BasicBlock(64, 64, 1) # [64, 64, 64]

        self.block2 = BasicBlock(64, 64) # [64, 64, 64]

        self.block3 = BasicBlock(64, 64) # [64, 64, 64]

        self.block4 = BasicBlock(64, 128, 2) # [128, 32, 32]

        self.block5 = BasicBlock(128, 128) # [128, 32, 32]

        self.block6 = BasicBlock(128, 128) # [128, 32, 32]

        self.block7 = BasicBlock(128, 128) # [128, 32, 32]

        self.block8 = BasicBlock(128, 256, 2) # [256, 16, 16]

        self.block9 = BasicBlock(256, 256) # [256, 16, 16]

        self.block10 = BasicBlock(256, 256) # [256, 16, 16]

        self.block11 = BasicBlock(256, 256) # [256, 16, 16]

        self.block12 = BasicBlock(256, 256) # [256, 16, 16]

        self.block13 = BasicBlock(256, 256) # [256, 16, 16]

        self.block14 = BasicBlock(256, 512, 2) # [512, 8, 8]

        self.block15 = BasicBlock(512, 512) # [512, 8, 8]

        self.block16 = BasicBlock(512, 512) # [512, 8, 8]

        self.avgpool = nn.AdaptiveAvgPool2d((1, 1))

        self.fc = nn.Linear(512, 11)

    def forward(self, x):
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.block1(out)
        out = self.block2(out)
        out = self.block3(out)
        out = self.block4(out)
        out = self.block5(out)
        out = self.block6(out)
        out = self.block7(out)
        out = self.block8(out)
        out = self.block9(out)
        out = self.block10(out)
        out = self.block11(out)
        out = self.block12(out)
        out = self.block13(out)
        out = self.block14(out)
        out = self.block15(out)
        out = self.block16(out)

        out = self.avgpool(out)

        out = out.view(out.size()[0], -1)
        return self.fc(out)
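The shape annotations in the comments above can be sanity-checked with the standard convolution output-size formula, floor((H + 2p - k) / s) + 1. A PyTorch-free arithmetic sketch (`conv_out` is a helper defined here, not part of the homework code):

```python
def conv_out(h: int, k: int, s: int, p: int) -> int:
    """Spatial output size of a convolution: floor((h + 2p - k) / s) + 1."""
    return (h + 2 * p - k) // s + 1

h = conv_out(128, k=5, s=2, p=2)    # initial 5x5 stride-2 conv: 128 -> 64
for _ in range(3):                  # the three stride-2 stages (block4, block8, block14)
    h = conv_out(h, k=3, s=2, p=1)  # 64 -> 32 -> 16 -> 8
print(h)  # → 8
```

The 8x8x512 feature map is then collapsed by AdaptiveAvgPool2d((1, 1)) to a 512-dim vector before the final linear layer.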

Then Cross-Validation and Ensemble were added.

Five-fold cross-validation was used; the original training and validation sets were merged first, and the five folds were drawn from the combined set:

# "cuda" only when GPUs are available.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The number of training epochs and patience.
n_epochs = 200
patience = 50 # If no improvement in 'patience' epochs, early stop

import numpy as np
from sklearn.model_selection import KFold

from torch.utils.tensorboard import SummaryWriter
import datetime

# Set up 5-fold cross-validation
n_folds = 5
kf = KFold(n_splits=n_folds, shuffle=True, random_state=42)

# Load the full training data (for cross-validation)
train_set = FoodDataset(os.path.join(_dataset_dir, "training"), tfm=train_tfm)
valid_set = FoodDataset(os.path.join(_dataset_dir, "validation"), tfm=train_tfm)

# Merge the two datasets
combined_files = train_set.files + valid_set.files
full_dataset = FoodDataset(path="", tfm=train_tfm, files=combined_files)

oof_preds = np.zeros(len(full_dataset))  # out-of-fold (OOF) predictions
oof_labels = np.zeros(len(full_dataset)) # ground-truth labels

# Keep every base model for the later ensemble
base_models = [] 

timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
log_dir = f"runs/food_classification_{timestamp}"
writer = SummaryWriter(log_dir)  # log into the timestamped directory

for fold, (train_idx, val_idx) in enumerate(kf.split(full_dataset)):
    print(f"\n====== Fold {fold+1}/{n_folds} ======")
    
    # Split the merged dataset into this fold's train / validation subsets
    # (note: full_dataset applies train_tfm, so validation batches are augmented too)
    train_subset = Subset(full_dataset, train_idx)
    val_subset = Subset(full_dataset, val_idx)
    
    # DataLoader
    train_loader = DataLoader(
        train_subset, 
        batch_size=batch_size, 
        shuffle=True, 
        num_workers=12,
        pin_memory=True,
        persistent_workers=True
    )
    val_loader = DataLoader(
        val_subset,
        batch_size=batch_size,
        shuffle=False,
        num_workers=8,
        pin_memory=True,
        persistent_workers=True
    )

    # A fresh model and optimizer for each fold
    model = Classifier().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.0003, weight_decay=1e-5)
    criterion = nn.CrossEntropyLoss()
    
    # Early-stopping state (independent per fold)
    fold_best_acc = 0
    stale = 0

    # Training loop (same logic as before)
    for epoch in range(n_epochs):
        # ---------- Training ----------
        model.train()
        train_loss, train_accs = [], []
        
        for batch in tqdm(train_loader, desc=f"Epoch {epoch+1}"):
            imgs, labels = batch
            imgs, labels = imgs.to(device), labels.to(device)
            
            logits = model(imgs)
            loss = criterion(logits, labels)
            
            optimizer.zero_grad()
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)
            optimizer.step()
            
            acc = (logits.argmax(dim=-1) == labels).float().mean()
            train_loss.append(loss.item())
            train_accs.append(acc.item())
        
        # Average training metrics for this epoch
        avg_loss = np.mean(train_loss)
        avg_acc = np.mean(train_accs)

        # Write to TensorBoard
        writer.add_scalar(f'Fold_{fold}/Train/Loss', avg_loss, epoch)
        writer.add_scalar(f'Fold_{fold}/Train/Accuracy', avg_acc, epoch)

        print(f"[ Train | {epoch+1:03d}/{n_epochs:03d} ] loss = {avg_loss:.5f}, acc = {avg_acc:.5f}")

        # ---------- Validation ----------
        model.eval()
        val_loss, val_accs, val_preds = [], [], []
        val_labels = []  # accumulate ground-truth labels across validation batches

        for batch in tqdm(val_loader, desc="Validating"):
            imgs, labels = batch
            imgs = imgs.to(device)
            labels_np = labels.numpy()
            val_labels.extend(labels_np)
            
            with torch.no_grad():
                logits = model(imgs)
                loss = criterion(logits, labels.to(device))
                preds = logits.argmax(dim=-1).cpu().numpy()
            
            val_loss.append(loss.item())
            val_accs.append((preds == labels_np).mean())
            val_preds.extend(preds)

        # Record OOF predictions and labels (note: these are overwritten every
        # epoch, so the final OOF values come from this fold's last trained epoch)
        oof_preds[val_idx] = np.array(val_preds)
        oof_labels[val_idx] = np.array(val_labels) 

        # Average validation metrics for this epoch
        avg_val_loss = np.mean(val_loss)
        avg_val_acc = np.mean(val_accs)

        # Write to TensorBoard
        writer.add_scalar(f'Fold_{fold}/Val/Loss', avg_val_loss, epoch)
        writer.add_scalar(f'Fold_{fold}/Val/Accuracy', avg_val_acc, epoch)

        print(f"[ Valid | {epoch+1:03d}/{n_epochs:03d} ] loss = {avg_val_loss:.5f}, acc = {avg_val_acc:.5f}")

        # Early stopping (independent per fold)
        if avg_val_acc > fold_best_acc:
            print(f"Fold {fold} best model at epoch {epoch}")
            torch.save(model.state_dict(), f"fold{fold}_best.ckpt")
            fold_best_acc = avg_val_acc
            stale = 0
        else:
            stale += 1
            if stale > patience:
                print(f"Early stopping at epoch {epoch}")
                break
    
    # Reload this fold's best checkpoint so the ensemble uses the best weights,
    # not the last-epoch weights
    model.load_state_dict(torch.load(f"fold{fold}_best.ckpt"))
    base_models.append(model)

# Close the TensorBoard writer
writer.close()

# ---------- Post-processing ----------
# Compute the OOF accuracy
oof_acc = (oof_preds == oof_labels).mean()
print(f"\n[OOF Accuracy] {oof_acc:.4f}")
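As a sanity check on the OOF bookkeeping, KFold partitions the indices so that each sample lands in exactly one validation fold, which is what makes the OOF arrays well-defined. A toy example with 10 samples (the dataset size is made up):

```python
import numpy as np
from sklearn.model_selection import KFold

n_samples = 10  # toy dataset size, purely for illustration
kf = KFold(n_splits=5, shuffle=True, random_state=42)

seen = []
for train_idx, val_idx in kf.split(np.arange(n_samples)):
    # within a fold, the train and validation indices never overlap
    assert set(train_idx).isdisjoint(val_idx)
    seen.extend(val_idx)

# across all folds, the validation indices cover every sample exactly once
print(sorted(int(i) for i in seen))  # → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```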

After saving the five base models, the test stage uses an ensemble:

# Ensemble prediction (soft voting)
all_preds = []
for model in base_models:
    model.eval()
    fold_preds = []
    for data, _ in test_loader:  # same test_loader as in the original code
        with torch.no_grad():
            logits = model(data.to(device))
            # keep each model's raw logits rather than taking argmax per model
            fold_preds.append(logits.cpu().numpy())
    # concatenate this model's predictions across all batches
    fold_preds = np.concatenate(fold_preds, axis=0)
    all_preds.append(fold_preds)

# Soft voting: average the logits across models, then take the argmax
all_preds = np.stack(all_preds)  # shape: (n_models, n_samples, n_classes)
prediction = all_preds.mean(axis=0).argmax(axis=1)  # shape: (n_samples,)
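To see why averaging logits (soft voting) can differ from letting each model cast a hard vote, here is a toy NumPy example with made-up logits for three models, one sample, and three classes:

```python
import numpy as np

# Hypothetical logits from 3 models for 1 sample over 3 classes
all_preds = np.array([
    [[2.0, 1.9, 0.0]],   # model 1: barely prefers class 0
    [[2.0, 1.9, 0.0]],   # model 2: barely prefers class 0
    [[0.0, 6.0, 0.0]],   # model 3: strongly prefers class 1
])

hard = np.argmax(all_preds, axis=2).ravel()      # per-model votes: [0, 0, 1]
majority = np.bincount(hard).argmax()            # hard majority vote -> class 0
soft = all_preds.mean(axis=0).argmax(axis=1)[0]  # soft vote -> class 1

print(majority, soft)  # → 0 1
```

Soft voting lets model 3's high confidence outweigh the two weak preferences for class 0, which hard majority voting would ignore.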

The final result is as follows:

(image: strong baseline results)

Already very close to the boss baseline.

Due to time constraints I didn't attempt the boss baseline; I'll come back and fill it in when I have time.

The two report problems, data augmentation and designing a residual network, were already covered along the way in the medium and strong parts, so there is nothing extra to add.
