What is LibTorch?
LibTorch is the C++ distribution of PyTorch, providing a research- and production-ready library for tensor computation and dynamic neural networks with strong GPU acceleration and fast CPU performance. The PyTorch C++ API can be roughly divided into five parts:
- ATen: The foundational tensor and mathematical operation library on which all else is built
- Autograd: Augments ATen with automatic differentiation
- C++ Frontend: High level constructs for training and evaluation of machine learning models
- TorchScript: An interface to the TorchScript JIT compiler and interpreter
- C++ Extensions: A means of extending the Python API with custom C++ and CUDA routines
Why Use the C++ API?
The PyTorch C++ API is well suited to several use cases:
Production Deployment
- No Python Dependency: Deploy models without Python runtime overhead
- Better Performance: Lower latency and higher throughput for inference
- Multi-threading: Native C++ threading without GIL constraints
- Embedded Systems: Run on resource-constrained devices
High-Performance Computing
- Direct Hardware Access: Fine-grained control over GPU operations
- Custom Kernels: Write CUDA kernels and integrate seamlessly
- Optimized Execution: Minimize overhead for performance-critical applications
Research Applications
- Novel Architectures: Implement custom layers and operations
- Low-level Control: Access to ATen’s tensor internals
- C++ Libraries: Integrate with existing C++ codebases
API Components
ATen: Core Tensor Library
ATen provides the fundamental Tensor class with hundreds of operations. Tensors dynamically dispatch to CPU or GPU implementations based on their type. Operations are exposed in the at:: namespace; see the Tensor API documentation for details.
Autograd: Automatic Differentiation
The autograd system records operations on tensors to form a computational graph. Calling backward() performs reverse-mode differentiation.
Tensors created with torch:: factory functions are differentiable, while those from at:: are not. Use the torch:: namespace for autograd support.
C++ Frontend: High-Level API
The C++ frontend provides a PyTorch-like interface for building and training models:
- Module System: Hierarchical models like torch.nn.Module
- Standard Library: Pre-built layers (convolutions, RNNs, batch normalization)
- Optimizers: SGD, Adam, RMSprop, and more
- Data Loading: Parallel data loading like torch.utils.data.DataLoader
- Serialization: Save and load model checkpoints
TorchScript: Model Export
Load Python-trained models in C++ for production inference.
Next Steps
Install LibTorch
Download and set up the LibTorch distribution for your platform.
Installation Guide →
Learn Tensor Operations
Understand the core Tensor API and ATen operations.
Tensor API →
Use Autograd
Enable automatic differentiation for your computations.
Autograd API →
Build Models
Create neural network modules using the C++ frontend.
Module API →