Overview
The torch.utils.data package provides tools for data loading and processing. The key components are:
- DataLoader - Iterates over datasets with batching, shuffling, and multiprocessing
- Dataset - Abstract class for defining custom datasets (see the sketch below)
- Sampler - Defines sampling strategy for data loading
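As a quick sketch of the Dataset interface, here is a minimal map-style dataset (the class and its contents are made up for illustration):
import torch
from torch.utils.data import Dataset

class SquaresDataset(Dataset):
    """Toy map-style dataset returning (x, x**2) pairs."""
    def __init__(self, n=100):
        self.x = torch.arange(n, dtype=torch.float32)

    def __len__(self):
        # Total number of samples
        return len(self.x)

    def __getitem__(self, idx):
        # Return one (input, target) pair by index
        return self.x[idx], self.x[idx] ** 2
A map-style dataset only needs __len__ and __getitem__; DataLoader layers batching, shuffling, and worker processes on top.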
DataLoader
Combines a dataset and sampler to provide an iterable over the dataset.
torch.utils.data.DataLoader(dataset, batch_size=1, shuffle=False, sampler=None,
                            batch_sampler=None, num_workers=0, collate_fn=None,
                            pin_memory=False, drop_last=False, timeout=0,
                            worker_init_fn=None, multiprocessing_context=None,
                            generator=None, prefetch_factor=None,
                            persistent_workers=False, pin_memory_device='')
Parameters
- dataset - Dataset from which to load the data
- batch_size - How many samples per batch to load (default: 1)
- shuffle - Set to True to have the data reshuffled at every epoch (default: False)
- sampler - Defines the strategy to draw samples from the dataset. Mutually exclusive with shuffle
- batch_sampler - Like sampler, but returns a batch of indices at a time. Mutually exclusive with batch_size, shuffle, sampler, and drop_last
- num_workers - How many subprocesses to use for data loading. 0 means data will be loaded in the main process (default: 0)
- collate_fn - Merges a list of samples to form a mini-batch of Tensors. Used with batched loading from map-style datasets
- pin_memory - If True, the data loader will copy Tensors into CUDA pinned memory before returning them
- drop_last - Set to True to drop the last incomplete batch, if the dataset size is not divisible by the batch size
- timeout - If positive, the timeout value (in seconds) for collecting a batch from workers
- worker_init_fn - If not None, this will be called on each worker subprocess with the worker id as input
- multiprocessing_context - If None, the default multiprocessing context of the operating system is used
- generator - If not None, this RNG is used by RandomSampler to generate random indices and by multiprocessing workers to generate their base seeds
- prefetch_factor - Number of batches loaded in advance by each worker (default: 2 when num_workers > 0, so 2 * num_workers batches are prefetched in total)
- persistent_workers - If True, the data loader will not shut down the worker processes after a dataset has been consumed once
- pin_memory_device - The device to pin memory to when pin_memory is True
Example - Basic Usage
from torch.utils.data import DataLoader, TensorDataset
import torch
# Create a simple dataset
data = torch.randn(100, 3, 32, 32)
labels = torch.randint(0, 10, (100,))
dataset = TensorDataset(data, labels)
# Create DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
# Iterate over batches
for batch_data, batch_labels in dataloader:
    # batch_data shape: [32, 3, 32, 32]
    # batch_labels shape: [32]
    pass
Example - Multi-Process Loading
# Use multiple workers for faster data loading
dataloader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4,     # Use 4 subprocesses
    pin_memory=True,   # Faster data transfer to GPU
    prefetch_factor=2  # Each worker prefetches 2 batches
)
Example - Custom Collate Function
def custom_collate(batch):
    """Custom collate function for variable-length sequences."""
    data = [item[0] for item in batch]
    labels = [item[1] for item in batch]
    # Pad sequences to the same length
    data = torch.nn.utils.rnn.pad_sequence(data, batch_first=True)
    labels = torch.tensor(labels)
    return data, labels
dataloader = DataLoader(dataset, batch_size=32, collate_fn=custom_collate)
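To see the collate function in action, here is a toy dataset of variable-length sequences (the dataset is hypothetical, made up for this example):
import torch
from torch.utils.data import Dataset, DataLoader

class VarLenDataset(Dataset):
    """Toy dataset yielding 1-D tensors of varying length."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # Sequence idx has length idx + 1
        return torch.ones(idx + 1), idx % 2

loader = DataLoader(VarLenDataset(), batch_size=4, collate_fn=custom_collate)
data, labels = next(iter(loader))
# data shape: [4, 4] (padded to the longest sequence in the batch)
# labels shape: [4]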
get_worker_info()
Returns information about the current DataLoader worker process, or None when called in the main process.
torch.utils.data.get_worker_info()
- id - Worker ID (0 to num_workers - 1)
- num_workers - Total number of worker processes
- seed - Random seed set for this worker
- dataset - This worker's copy of the dataset object
Example
def worker_init_fn(worker_id):
    worker_info = torch.utils.data.get_worker_info()
    dataset = worker_info.dataset
    # Configure each worker's copy of the dataset differently
    dataset.start_idx = worker_id
    dataset.num_workers = worker_info.num_workers

dataloader = DataLoader(
    dataset,
    num_workers=4,
    worker_init_fn=worker_init_fn
)
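get_worker_info() is also the standard way to shard an IterableDataset across workers. A minimal sketch (the round-robin splitting scheme is just one possible choice):
import torch
from torch.utils.data import IterableDataset, DataLoader

class RangeDataset(IterableDataset):
    """Streams the integers [start, end), split across workers."""
    def __init__(self, start, end):
        self.start, self.end = start, end

    def __iter__(self):
        info = torch.utils.data.get_worker_info()
        if info is None:
            # Single-process loading: yield the whole range
            return iter(range(self.start, self.end))
        # Each worker takes every num_workers-th element
        return iter(range(self.start + info.id, self.end, info.num_workers))

loader = DataLoader(RangeDataset(0, 10), num_workers=2)
# Across both workers, each of 0..9 is yielded exactly once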
default_collate()
Default collate function that converts a batch of samples to Tensors.
torch.utils.data.default_collate(batch)
Parameters
- batch - List of samples to collate
Behavior
- Tensors → Stack into a batch tensor
- NumPy arrays → Convert to tensors and stack
- Numbers → Convert to tensor
- Dicts → Collate each value and return dict
- Lists/Tuples → Collate each element
Example
from torch.utils.data import default_collate
batch = [
    (torch.tensor([1, 2, 3]), 0),
    (torch.tensor([4, 5, 6]), 1),
    (torch.tensor([7, 8, 9]), 2),
]
data, labels = default_collate(batch)
# data shape: [3, 3]
# labels shape: [3]
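The dict behavior means a batch of sample dicts comes back as a single dict of batched tensors:
batch = [
    {"x": torch.tensor([1.0, 2.0]), "y": 0},
    {"x": torch.tensor([3.0, 4.0]), "y": 1},
]
out = default_collate(batch)
# out["x"] shape: [2, 2]
# out["y"] shape: [2]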
Samplers
Samplers define the strategy to draw samples from a dataset.
SequentialSampler
Samples elements sequentially, always in the same order.
from torch.utils.data import SequentialSampler
sampler = SequentialSampler(dataset)
dataloader = DataLoader(dataset, sampler=sampler)
RandomSampler
Samples elements randomly.
from torch.utils.data import RandomSampler
sampler = RandomSampler(dataset, replacement=False)
dataloader = DataLoader(dataset, sampler=sampler)
- replacement - If True, samples are drawn with replacement
- num_samples - Number of samples to draw. Defaults to the length of the dataset
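For example, to draw a fixed number of samples per epoch regardless of dataset size:
# Draw exactly 10 samples per epoch, with replacement
sampler = RandomSampler(dataset, replacement=True, num_samples=10)
dataloader = DataLoader(dataset, sampler=sampler)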
SubsetRandomSampler
Samples elements randomly from a given list of indices.
from torch.utils.data import SubsetRandomSampler
indices = list(range(100))
sampler = SubsetRandomSampler(indices)
dataloader = DataLoader(dataset, sampler=sampler)
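A common use is an index-level train/validation split (the 80/20 ratio here is arbitrary):
import torch

n = len(dataset)
perm = torch.randperm(n).tolist()
split = int(0.8 * n)
train_sampler = SubsetRandomSampler(perm[:split])
val_sampler = SubsetRandomSampler(perm[split:])
train_loader = DataLoader(dataset, batch_size=32, sampler=train_sampler)
val_loader = DataLoader(dataset, batch_size=32, sampler=val_sampler)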
WeightedRandomSampler
Samples elements with given probabilities (weights).
from torch.utils.data import WeightedRandomSampler
# Sample more from minority classes
weights = [0.1, 0.9, 0.1, 0.9, ...] # One weight per sample
sampler = WeightedRandomSampler(
    weights,
    num_samples=len(weights),
    replacement=True
)
dataloader = DataLoader(dataset, sampler=sampler)
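One way to build the per-sample weights is to invert the class frequencies, as in this sketch (it assumes integer class labels in a 1-D labels tensor):
import torch

# labels: LongTensor of class indices, one per sample
class_counts = torch.bincount(labels)
class_weights = 1.0 / class_counts.float()
sample_weights = class_weights[labels]  # One weight per sample
sampler = WeightedRandomSampler(
    sample_weights,
    num_samples=len(sample_weights),
    replacement=True
)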
BatchSampler
Wraps another sampler to yield a mini-batch of indices.
from torch.utils.data import BatchSampler, SequentialSampler
sampler = SequentialSampler(dataset)
batch_sampler = BatchSampler(sampler, batch_size=32, drop_last=False)
dataloader = DataLoader(dataset, batch_sampler=batch_sampler)
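Iterating the wrapped sampler yields lists of indices rather than single indices:
list(BatchSampler(SequentialSampler(range(10)), batch_size=3, drop_last=False))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]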
Distributed Sampling
DistributedSampler
Sampler that restricts data loading to a subset of the dataset, so that each process in a distributed training job works on its own partition.
from torch.utils.data.distributed import DistributedSampler
sampler = DistributedSampler(
    dataset,
    num_replicas=None,  # Number of processes (auto-detected)
    rank=None,          # Rank of the current process (auto-detected)
    shuffle=True,
    seed=0
)
dataloader = DataLoader(
    dataset,
    batch_size=32,
    sampler=sampler
)

# In the training loop
for epoch in range(num_epochs):
    sampler.set_epoch(epoch)  # Shuffle differently each epoch
    for batch in dataloader:
        ...
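DistributedSampler assumes the default process group is already initialized. A minimal setup sketch, assuming the script is launched with torchrun (which sets the environment variables that init_process_group reads):
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # rank and world size come from the environment
sampler = DistributedSampler(dataset)    # num_replicas and rank auto-detected from the group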
Best Practices
# Recommended settings for GPU training
dataloader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=4,            # Typically 4-8 workers
    pin_memory=True,          # Faster host-to-GPU transfer
    persistent_workers=True,  # Keep workers alive between epochs
    prefetch_factor=2         # Batches prefetched per worker
)
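pin_memory pairs with non-blocking host-to-device copies, as in this sketch of a typical GPU loop:
device = torch.device("cuda")
for data, target in dataloader:
    # Asynchronous copy from pinned host memory to the GPU
    data = data.to(device, non_blocking=True)
    target = target.to(device, non_blocking=True)
    ...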
Memory Efficiency
# Use set_to_none for memory efficiency
for data, target in dataloader:
    optimizer.zero_grad(set_to_none=True)  # Frees gradient memory instead of zeroing in place
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
Deterministic Loading
import torch
# For reproducibility
torch.manual_seed(42)
generator = torch.Generator().manual_seed(42)
dataloader = DataLoader(
    dataset,
    batch_size=32,
    shuffle=True,
    generator=generator,  # Fixed generator drives shuffling and worker seeding
    worker_init_fn=lambda worker_id: torch.manual_seed(42 + worker_id)
)
Common Patterns
Train/Val Split
from torch.utils.data import random_split
train_size = int(0.8 * len(dataset))
val_size = len(dataset) - train_size
train_dataset, val_dataset = random_split(dataset, [train_size, val_size])
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
val_loader = DataLoader(val_dataset, batch_size=32, shuffle=False)
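random_split also accepts a generator, so the split itself can be made reproducible:
train_dataset, val_dataset = random_split(
    dataset, [train_size, val_size],
    generator=torch.Generator().manual_seed(42)
)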
Custom Worker Initialization
import numpy as np
import random
import torch

def worker_init_fn(worker_id):
    # Derive a distinct, reproducible seed for each worker from the
    # base seed the DataLoader assigns to that worker
    worker_seed = torch.initial_seed() % 2**32
    np.random.seed(worker_seed)
    random.seed(worker_seed)

dataloader = DataLoader(
    dataset,
    num_workers=4,
    worker_init_fn=worker_init_fn
)
See Also