The Setup

  • Boilerplate
  • MNIST
  • How steps 1-4 are expressed in the example
  • How steps 5-6 are often ignored in literature
  • Implementing steps 5-6
  • TSNE, UMAP, SHAP, etc.
  • Example of deepfake detection

Boilerplate

We need to install Python, pull in the dependencies, set up a project structure, and finally make sure everything works. To keep it simple, we just want to install our library and import it reliably, so we test by defining and printing some default paths.

  • install git
  • install uv
sudo apt install git curl
curl -LsSf https://astral.sh/uv/install.sh | sh
  • init project
mkdir thesis && cd thesis
uv init --lib --verbose .
uv sync

# Activate the virtual environment (VS Code recognizes it automatically)
source .venv/bin/activate

echo "Using python version  : $(cat .python-version)"
echo "Created pyproject.toml: $(cat pyproject.toml | head -n 1)"
echo "Created virtual env   : $(ls ./.venv/bin/activate)"
  • make a note of the Python version in use; switch to the latest if needed
python -c "import sys; print(sys.version)"
  • install torch, pandas, matplotlib
uv add torch torchvision pandas matplotlib
  • create data, config, outputs, experiments directories
mkdir data config outputs experiments
  • Replace the content of src/thesis/__init__.py with a structure we will use very often.
from pathlib import Path

class Paths:
    source = Path(".")
    data = Path("data")
    config = Path("config")
    experiments = Path("experiments")
    outputs = Path("outputs")
  • activate virtual env
source .venv/bin/activate
  • test that the module is installed and the default paths can be used
python -c "import thesis; print(f'{thesis.Paths.data = }')"
# thesis.Paths.data = PosixPath('data')
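
The point of Paths is that every script and notebook refers to the same locations instead of hardcoding strings. A small sketch of the intended usage (the run1 name is just an illustration):

from thesis import Paths

# Hypothetical run directory; create it before writing outputs
run_dir = Paths.outputs / "run1"
run_dir.mkdir(parents=True, exist_ok=True)

checkpoint = run_dir / "mnist_cnn.pt"   # outputs/run1/mnist_cnn.pt
print(checkpoint)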

MNIST

The most basic of basic examples:

Basic MNIST Example

wget -O src/mnist.py https://raw.githubusercontent.com/pytorch/examples/refs/heads/main/mnist/main.py
python src/mnist.py

# 100.0%
# 100.0%
# 100.0%
# 100.0%
# Train Epoch: 1 [0/60000 (0%)] Loss: 2.293147
# Train Epoch: 1 [640/60000 (1%)]   Loss: 1.627885
# Train Epoch: 1 [1280/60000 (2%)]  Loss: 0.847082
# Train Epoch: 1 [1920/60000 (3%)]  Loss: 0.669613
# Train Epoch: 1 [2560/60000 (4%)]  Loss: 0.469336
# Train Epoch: 1 [3200/60000 (5%)]  Loss: 0.594092
# Train Epoch: 1 [3840/60000 (6%)]  Loss: 0.299630

At this point, it should already start training, with output looking like the above.

Let's have a deeper look at the example.

It is 141 lines of code, much of it argument parsing:

import argparse
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR


class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output


def train(args, model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        data, target = data.to(device), target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = F.nll_loss(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % args.log_interval == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))
            if args.dry_run:
                break


def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = model(data)
            test_loss += F.nll_loss(output, target, reduction='sum').item()  # sum up batch loss
            pred = output.argmax(dim=1, keepdim=True)  # get the index of the max log-probability
            correct += pred.eq(target.view_as(pred)).sum().item()

    test_loss /= len(test_loader.dataset)

    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))


def main():
    # Training settings
    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size', type=int, default=64, metavar='N',
                        help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
                        help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs', type=int, default=14, metavar='N',
                        help='number of epochs to train (default: 14)')
    parser.add_argument('--lr', type=float, default=1.0, metavar='LR',
                        help='learning rate (default: 1.0)')
    parser.add_argument('--gamma', type=float, default=0.7, metavar='M',
                        help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--no-accel', action='store_true',
                        help='disables accelerator')
    parser.add_argument('--dry-run', action='store_true',
                        help='quickly check a single pass')
    parser.add_argument('--seed', type=int, default=1, metavar='S',
                        help='random seed (default: 1)')
    parser.add_argument('--log-interval', type=int, default=10, metavar='N',
                        help='how many batches to wait before logging training status')
    parser.add_argument('--save-model', action='store_true',
                        help='For Saving the current Model')
    args = parser.parse_args()

    use_accel = not args.no_accel and torch.accelerator.is_available()

    torch.manual_seed(args.seed)

    if use_accel:
        device = torch.accelerator.current_accelerator()
    else:
        device = torch.device("cpu")

    train_kwargs = {'batch_size': args.batch_size}
    test_kwargs = {'batch_size': args.test_batch_size}
    if use_accel:
        accel_kwargs = {'num_workers': 1,
                        'persistent_workers': True,
                        'pin_memory': True,
                        'shuffle': True}
        train_kwargs.update(accel_kwargs)
        test_kwargs.update(accel_kwargs)

    transform=transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,))
        ])
    dataset1 = datasets.MNIST('../data', train=True, download=True,
                       transform=transform)
    dataset2 = datasets.MNIST('../data', train=False,
                       transform=transform)
    train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
    test_loader = torch.utils.data.DataLoader(dataset2, **test_kwargs)

    model = Net().to(device)
    optimizer = optim.Adadelta(model.parameters(), lr=args.lr)

    scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)
    for epoch in range(1, args.epochs + 1):
        train(args, model, device, train_loader, optimizer, epoch)
        test(model, device, test_loader)
        scheduler.step()

    if args.save_model:
        torch.save(model.state_dict(), "mnist_cnn.pt")


if __name__ == '__main__':
    main()

Randomness

torch.manual_seed(args.seed)
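
torch.manual_seed only seeds PyTorch's own generators. For stricter reproducibility one usually also seeds Python's and NumPy's RNGs and pins cuDNN to deterministic kernels; a minimal sketch (the cuDNN flags trade speed for reproducibility):

import random
import numpy as np
import torch

def seed_everything(seed: int = 1) -> None:
    random.seed(seed)                    # Python's built-in RNG
    np.random.seed(seed)                 # NumPy RNG
    torch.manual_seed(seed)              # PyTorch CPU and current-GPU RNGs
    torch.cuda.manual_seed_all(seed)     # all visible GPUs (no-op without CUDA)
    torch.backends.cudnn.deterministic = True   # reproducible, but slower
    torch.backends.cudnn.benchmark = False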

Datasets

transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])

dataset1 = datasets.MNIST('../data', train = True,  download = True,  transform = transform)
dataset2 = datasets.MNIST('../data', train = False, download = False, transform = transform)

train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)
test_loader  = torch.utils.data.DataLoader(dataset2, **test_kwargs)
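
The magic numbers 0.1307 and 0.3081 are the per-pixel mean and standard deviation of the MNIST training images; a sketch of how one could verify them:

import torch
from torchvision import datasets, transforms

raw = datasets.MNIST('../data', train=True, download=True,
                     transform=transforms.ToTensor())
imgs = torch.stack([img for img, _ in raw])    # shape [60000, 1, 28, 28]
print(imgs.mean().item(), imgs.std().item())   # ~0.1307, ~0.3081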

Model

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1    = nn.Conv2d(1, 32, 3, 1)
        self.conv2    = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1      = nn.Linear(9216, 128)
        self.fc2      = nn.Linear(128, 10)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = F.max_pool2d(x, 2)
        x = self.dropout1(x)
        x = torch.flatten(x, 1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.dropout2(x)
        x = self.fc2(x)
        output = F.log_softmax(x, dim=1)
        return output

# Random initialization
model = Net().to(device)

# When training (for example, consider behaviour of dropout)
model.train()

# When testing
model.eval()

# Save checkpoint
torch.save(model.state_dict(), "mnist_cnn.pt")

# Load checkpoint
model = Net()
model.load_state_dict(torch.load("mnist_cnn.pt", weights_only=True))
model.eval()
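
A quick sanity check on model size; the printed value follows directly from the layer shapes above (fc1 alone contributes 9216 x 128 weights):

model = Net()
n_params = sum(p.numel() for p in model.parameters())
print(n_params)   # 1199882, i.e. roughly 1.2M parameters, dominated by fc1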

Metrics

model.eval()
correct = 0
with torch.no_grad():
    for data, target in test_loader:
        data, target = data.to(device), target.to(device)
        output   = model(data)
        pred     = output.argmax(dim=1, keepdim=True)
        correct += pred.eq(target.view_as(pred)).sum().item()

acc = 100. * correct / len(test_loader.dataset)
print(f'Accuracy: {acc}%')

Training

transform    = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,)) ])
dataset1     = datasets.MNIST('../data', train=True, download=False, transform=transform)
train_loader = torch.utils.data.DataLoader(dataset1,**train_kwargs)

model = Net().to(device)
model.train()

optimizer = optim.Adadelta(model.parameters(), lr=args.lr)
scheduler = StepLR(optimizer, step_size=1, gamma=args.gamma)

for epoch in range(1, args.epochs + 1):
    train(args, model, device, train_loader, optimizer, epoch)
    scheduler.step()


def train(args, model, device, train_loader, optimizer, epoch):

    # Crucial (consider behaviour of dropout)!!
    model.train()

    # Get batches of samples and ground truth
    for batch_idx, (data, target) in enumerate(train_loader):

        # Put on gpu if needed
        data, target = data.to(device), target.to(device)

        # Reset previous gradients
        optimizer.zero_grad()

        # Forward pass
        output = model(data)

        # Compute loss
        loss = F.nll_loss(output, target)

        # Propagate gradients
        loss.backward()
        optimizer.step()

        # Log metrics
        if batch_idx % args.log_interval == 0:
            print(f"Loss: {loss}")

Device specific

use_accel = not args.no_accel and torch.accelerator.is_available()

if use_accel:
    device = torch.accelerator.current_accelerator()
    accel_kwargs = {'num_workers'       : 1,
                    'persistent_workers': True,
                    'pin_memory'        : True,
                    'shuffle'           : True}
    train_kwargs.update(accel_kwargs)
    test_kwargs.update(accel_kwargs)
else:
    device = torch.device("cpu")
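
torch.accelerator is a fairly recent addition to PyTorch; on older versions the same selection can be sketched with the long-standing CUDA check:

# Fallback device selection (assumption: only CUDA or CPU are available)
if hasattr(torch, "accelerator") and torch.accelerator.is_available():
    device = torch.accelerator.current_accelerator()   # cuda, mps, xpu, ...
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")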

Options, Hyperparams

    parser = argparse.ArgumentParser(description='PyTorch MNIST Example')
    parser.add_argument('--batch-size',      type=int,            default=64,   metavar='N',  help='input batch size for training (default: 64)')
    parser.add_argument('--test-batch-size', type=int,            default=1000, metavar='N',  help='input batch size for testing (default: 1000)')
    parser.add_argument('--epochs',          type=int,            default=14,   metavar='N',  help='number of epochs to train (default: 14)')
    parser.add_argument('--lr',              type=float,          default=1.0,  metavar='LR', help='learning rate (default: 1.0)')
    parser.add_argument('--gamma',           type=float,          default=0.7,  metavar='M',  help='Learning rate step gamma (default: 0.7)')
    parser.add_argument('--seed',            type=int,            default=1,    metavar='S',  help='random seed (default: 1)')
    parser.add_argument('--log-interval',    type=int,            default=10,   metavar='N',  help='how many batches to wait before logging training status')
    parser.add_argument('--save-model',      action='store_true', help='For Saving the current Model')
    parser.add_argument('--no-accel',        action='store_true', help='disables accelerator')
    parser.add_argument('--dry-run',         action='store_true', help='quickly check a single pass')
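
All of these can be overridden on the command line; for example, a quick one-epoch run that saves a checkpoint:

python src/mnist.py --epochs 1 --batch-size 128 --save-model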

How steps 1-4 are expressed in the example

  1. Find shapes of inputs and outputs
  2. Run model on single example (see the sketch below)
  3. Compute metrics
  4. Establish baseline
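
A minimal sketch of steps 1-3, assuming the Net class and the dataset2 test set from above are in scope. For step 4, note that the ten MNIST classes are nearly balanced, so an untrained or constant classifier scores roughly 10% accuracy, which is the baseline to beat:

import torch

x, y = dataset2[0]                 # a single test sample
print(x.shape, y)                  # torch.Size([1, 28, 28]) 7

model = Net()
model.eval()
with torch.no_grad():
    out = model(x.unsqueeze(0))    # add a batch dimension -> [1, 1, 28, 28]
print(out.shape)                   # torch.Size([1, 10]): log-probs per class
print(out.argmax(dim=1))           # predicted digit; untrained, this is ~random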

How steps 5-6 are often ignored in literature

  5. Investigate issues, i.e. find root causes
  6. Expand literature survey

Implementing steps 5-6

  5. Investigate issues, i.e. find root causes
  6. Expand literature survey

TSNE, UMAP, SHAP, etc.
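
This section is still a stub, but as a starting point, here is a hedged sketch of one of these tools: projecting the trained Net's penultimate-layer (fc1) activations into 2D with scikit-learn's TSNE. It assumes scikit-learn is installed (uv add scikit-learn) and that model and test_loader live on the CPU:

import matplotlib.pyplot as plt
import torch
import torch.nn.functional as F
from sklearn.manifold import TSNE

model.eval()
feats, labels = [], []
with torch.no_grad():
    for data, target in test_loader:
        # Re-run the forward pass up to fc1 to grab penultimate features
        x = F.relu(model.conv1(data))
        x = F.relu(model.conv2(x))
        x = torch.flatten(F.max_pool2d(x, 2), 1)
        x = F.relu(model.fc1(x))                  # shape [B, 128]
        feats.append(x)
        labels.append(target)
        if sum(len(t) for t in labels) >= 2000:   # t-SNE is slow; subsample
            break

feats = torch.cat(feats).numpy()
labels = torch.cat(labels).numpy()
emb = TSNE(n_components=2).fit_transform(feats)   # [N, 2] embedding

plt.scatter(emb[:, 0], emb[:, 1], c=labels, cmap='tab10', s=4)
plt.colorbar(label='digit')
plt.savefig('outputs/tsne_mnist.png')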

Example of deepfake detection