Installation

PyTorch can be installed via package managers or built from source. Choose the installation method that best fits your needs.

Prerequisites

Before installing PyTorch, ensure you have:
  • Python 3.10 or later
  • pip or conda package manager
  • (Optional) CUDA-capable GPU for GPU acceleration
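A quick way to check these prerequisites from a shell (the nvidia-smi line applies only if you have an NVIDIA GPU and driver installed):

```shell
# Verify the Python version (should report 3.10 or later)
python --version

# Verify that pip is available
pip --version

# Optional: list CUDA-capable GPUs (requires an NVIDIA driver)
# nvidia-smi
```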

Quick Install

For most users, we recommend installing the pre-built binaries using pip or conda. Visit https://pytorch.org/get-started/locally/ to get the exact command for your system configuration.

Install with pip

# Install PyTorch for CPU
pip install torch torchvision torchaudio

Install with conda

# Install PyTorch for CPU
conda install pytorch torchvision torchaudio cpuonly -c pytorch

Verify Installation

After installation, verify that PyTorch is working correctly:
import torch

# Print PyTorch version
print(f"PyTorch version: {torch.__version__}")

# Create a simple tensor
x = torch.rand(5, 3)
print(f"Random tensor:\n{x}")
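If you installed a CUDA build, you can also confirm that PyTorch sees your GPU. This is a small sketch; on CPU-only installs it simply prints False, which is expected:

```python
import torch

# Report whether a CUDA-capable GPU is visible to this build of PyTorch.
# On CPU-only installs this prints False, which is expected.
print(f"CUDA available: {torch.cuda.is_available()}")

if torch.cuda.is_available():
    # Index 0 is the default device
    print(f"Device name: {torch.cuda.get_device_name(0)}")
```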

Building from Source

Building from source requires significant disk space (10+ GB) and compilation time (30-60 minutes). Only do this if you need the latest features or custom builds.

Prerequisites for Building

To build from source, you will need:
  • Python 3.10 or later
  • A C++20 compatible compiler (gcc 11.3.0+ on Linux, Clang on macOS)
  • At least 10 GB of free disk space
  • 30-60 minutes for the initial build
On Windows, you’ll need Visual Studio 2022 or Visual Studio Build Tools. The build tools can be downloaded from https://visualstudio.microsoft.com/visual-cpp-build-tools/

Step 1: Set up environment

# Create and activate conda environment
source <CONDA_INSTALL_DIR>/bin/activate
conda create -y -n pytorch_build python=3.11
conda activate pytorch_build

Step 2: Clone PyTorch source

git clone https://github.com/pytorch/pytorch
cd pytorch

# If updating an existing checkout
git submodule sync
git submodule update --init --recursive

Step 3: Install dependencies

# Install build dependencies
pip install --group dev

# Install Intel MKL
pip install mkl-static mkl-include

# CUDA only: Add LAPACK support for GPU
# Specify your CUDA version (e.g., 12.4)
.ci/docker/common/install_magma_conda.sh 12.4

# Optional: Install triton for torch.compile support
export USE_XPU=1  # Only for Intel GPU
make triton

Step 4: Build PyTorch

# Set CMake prefix path for conda environment
export CMAKE_PREFIX_PATH="${CONDA_PREFIX:-'$(dirname $(which conda))/../'}:${CMAKE_PREFIX_PATH}"

# Install PyTorch
python -m pip install --no-build-isolation -v -e .

Build Options

You can customize the build with environment variables:
Variable          Description                        Default
USE_CUDA          Enable CUDA support                1 (if CUDA detected)
USE_ROCM          Enable ROCm support                0
USE_XPU           Enable Intel GPU support           0
USE_DISTRIBUTED   Enable distributed training        1
BUILD_TEST        Build C++ tests                    1
USE_MKLDNN        Use oneDNN for CPU acceleration    1
To disable CUDA support when building from source, set USE_CUDA=0 before running the install command:
export USE_CUDA=0
python -m pip install --no-build-isolation -v -e .

Docker Installation

You can also use pre-built Docker images:
# Pull and run the latest PyTorch image with GPU support
docker run --gpus all --rm -ti --ipc=host pytorch/pytorch:latest

# Run Python in the container
python -c "import torch; print(torch.cuda.is_available())"
PyTorch uses shared memory to share data between processes. When running in Docker, pass --ipc=host or --shm-size to give the container a larger shared-memory segment.
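For example, instead of --ipc=host you can raise the limit explicitly; the 2g value below is only an illustration, size it to your workload:

```shell
# Run the container with an enlarged shared-memory segment
# (2g is an example value, not a PyTorch requirement)
docker run --gpus all --rm -ti --shm-size=2g pytorch/pytorch:latest
```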

Platform-Specific Notes

NVIDIA Jetson Platforms

Python wheels for Jetson Nano, TX1/TX2, Xavier NX/AGX, and AGX Orin are available from the NVIDIA Developer Forums; they require JetPack 4.2 or later.

Windows with Visual Studio

On Windows, PyTorch supports Visual Studio 2019/2022 and Ninja as build generators. If ninja.exe is detected in PATH, Ninja will be used as the default generator.

Troubleshooting

If import torch fails (for example with an error mentioning torch._C), PyTorch’s C extensions likely failed to build or install correctly. Try:
  1. Reinstalling with pip install --force-reinstall torch
  2. Building from source with verbose output: python -m pip install --no-build-isolation -v -e .
  3. Checking that all dependencies are installed
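A minimal diagnostic sketch for import problems: it reports whether Python can locate the torch package at all, and from where, without assuming the install is healthy:

```python
import importlib.util

# Locate the torch package without importing it (importing a half-built
# package can raise; find_spec only searches for it on sys.path)
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not installed in this environment")
else:
    print(f"torch found at: {spec.origin}")
```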
If you encounter GPU out-of-memory errors (e.g., RuntimeError: CUDA out of memory):
  1. Reduce batch size
  2. Use gradient checkpointing for large models
  3. Enable mixed precision training with torch.amp (formerly torch.cuda.amp)
  4. Clear cache with torch.cuda.empty_cache()
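The mixed-precision suggestion can be sketched as follows; model, optimizer, loss_fn, inputs, and targets are placeholders for your own objects, and the snippet uses the current torch.amp API rather than the older torch.cuda.amp spelling:

```python
import torch

def train_step(model, optimizer, loss_fn, inputs, targets, scaler):
    """One training step in mixed precision (CUDA float16)."""
    optimizer.zero_grad()
    # Run the forward pass and loss in reduced precision
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(inputs), targets)
    # Scale the loss so small float16 gradients do not underflow
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()

# Typical setup:
#   scaler = torch.amp.GradScaler("cuda")
# After deleting large tensors, release cached (not allocated) memory:
#   torch.cuda.empty_cache()
```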
If builds are slow, note that initial builds take 30-60 minutes, while subsequent rebuilds are much faster. To speed up builds:
  1. Use MAX_JOBS environment variable to control parallelism: export MAX_JOBS=4
  2. Disable tests if not needed: export BUILD_TEST=0
  3. Use ccache for C++ compilation caching
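Taken together, the speed-up options look like this before a rebuild. The values are examples, and the CMAKE_*_COMPILER_LAUNCHER variables assume ccache is installed:

```shell
# Cap parallel compile jobs (lower values reduce peak memory use)
export MAX_JOBS=4

# Skip building the C++ test suite
export BUILD_TEST=0

# Route C/C++ compilation through ccache, if installed
export CMAKE_C_COMPILER_LAUNCHER=ccache
export CMAKE_CXX_COMPILER_LAUNCHER=ccache

python -m pip install --no-build-isolation -v -e .
```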

Next Steps

Quick Start Guide

Now that PyTorch is installed, learn the basics with our hands-on quick start guide.