TorchScript JIT
The torch.jit module provides functionality for creating TorchScript code from PyTorch code. TorchScript is a way to create serializable and optimizable models from PyTorch code.
Core Functions
script
Scripts a function or nn.Module, compiling it to TorchScript.
- obj: The function or module to be scripted. Can be a function, method, or nn.Module.
- example_inputs: Example inputs to the function or module. Used to infer argument types.
Returns the compiled TorchScript code.
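A minimal sketch (the function name `relu_or_neg` is illustrative): scripting preserves data-dependent control flow that tracing would bake into a single path.

```python
import torch

@torch.jit.script
def relu_or_neg(x: torch.Tensor) -> torch.Tensor:
    # The branch depends on tensor values; script compiles both paths.
    if x.sum() > 0:
        return torch.relu(x)
    return -x

relu_or_neg(torch.tensor([2.0, -1.0]))  # sum > 0, so the relu branch runs
```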
trace
Traces a function, returning an executable or ScriptModule that will be optimized using just-in-time compilation.
- func: A Python function or torch.nn.Module that will be run with example_inputs.
- example_inputs: A tuple of example inputs that will be passed to the function while tracing.
- check_trace: Check if the same inputs run through traced code produce the same outputs.
- strict: Run the tracer in strict mode, which enforces that the entire computation is traceable.
Returns the traced code.
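A short sketch with an arbitrary toy model: tracing records the operations executed on the example input, and `check_trace=True` reruns the inputs through the traced graph to compare outputs.

```python
import torch
import torch.nn as nn

# A small example model; the layer sizes are arbitrary.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
example = torch.randn(1, 4)

traced = torch.jit.trace(model, example, check_trace=True)
```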
Optimization Functions
freeze
Clones a ScriptModule and attempts to inline the cloned module's submodules, parameters, and attributes as constants.
- mod: The module to freeze.
- preserved_attrs: Attributes to preserve in the frozen module.
- optimize_numerics: Whether to run optimizations that assume floating point operations are associative.
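As a sketch (`SmallNet` is a toy module for illustration): freezing expects a ScriptModule in eval mode and inlines its weights as constants.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x)

# freeze requires a scripted module in eval mode.
scripted = torch.jit.script(SmallNet().eval())
frozen = torch.jit.freeze(scripted)
```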
optimize_for_inference
Performs a set of optimization passes to optimize a frozen module for inference.
- mod: The module to optimize.
- other_methods: Other methods to optimize in addition to forward.
Serialization
save
Saves a ScriptModule or ScriptFunction to a file.
- m: The module or function to save.
- f: A file-like object or a string containing a file name.
load
Loads a ScriptModule or ScriptFunction previously saved with torch.jit.save.
- f: A file-like object or a string containing a file name.
- map_location: A function, torch.device, string, or a dict specifying how to remap storage locations.
Returns the loaded module or function.
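A round-trip sketch: since both functions accept file-like objects, an in-memory buffer avoids disk I/O (the function `scale` is illustrative).

```python
import io
import torch

@torch.jit.script
def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

buffer = io.BytesIO()
torch.jit.save(scale, buffer)
buffer.seek(0)  # rewind before reading it back
loaded = torch.jit.load(buffer)
```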
Type Annotations
annotate
Gives a type hint for the_value to the TorchScript compiler.
- the_type: Python type that should be passed to the TorchScript compiler as a type hint.
- the_value: Value or expression to hint the type for.
isinstance
Provides container type refinement in TorchScript.
- obj: Object to refine the type of.
- target_type: Type to try to refine obj to.
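A sketch of both helpers (`collect` and `first_if_ints` are illustrative names): `annotate` tells the compiler what an otherwise-ambiguous empty container holds, and `torch.jit.isinstance` refines parameterized container types.

```python
import torch
from typing import Any, List

@torch.jit.script
def collect(x: int) -> List[int]:
    # Without the hint, an empty list defaults to List[Tensor] in TorchScript.
    xs = torch.jit.annotate(List[int], [])
    xs.append(x)
    return xs

def first_if_ints(obj: Any) -> int:
    # Refines the container type; also usable in eager mode.
    if torch.jit.isinstance(obj, List[int]):
        return obj[0]
    return -1
```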
Decorators and Context Managers
ignore
Indicates to the compiler that a function or method should be ignored and left as a Python function.
unused
Indicates to the compiler that a function or method should be ignored and replaced with the raising of an exception.
export
Indicates that a method on an nn.Module is used as an entry point into a ScriptModule and should be compiled.
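A sketch of `@torch.jit.ignore` (the `Logger` module is illustrative): the decorated method stays as ordinary Python and is skipped by the compiler, while the rest of the module is still scripted.

```python
import torch

class Logger(torch.nn.Module):
    @torch.jit.ignore
    def debug_log(self, x: torch.Tensor) -> None:
        # Not compiled; called back into the Python interpreter at runtime.
        print("mean:", x.mean().item())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        self.debug_log(x)
        return x + 1

scripted = torch.jit.script(Logger())
```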
Async Execution
fork
Creates an asynchronous task executing func and returns a reference to the value of the result of this execution.
- func: A Python function or TorchScript function to execute asynchronously.
wait
Forces completion of a torch.jit.Future[T] asynchronous task, returning the result of the task.
- future: The future to wait on.
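A small sketch of the pair (`heavy` is an illustrative workload): `fork` launches the call asynchronously and returns a future, and `wait` blocks until the result is ready.

```python
import torch

def heavy(x: torch.Tensor) -> torch.Tensor:
    return x.pow(2).sum()

future = torch.jit.fork(heavy, torch.arange(4.0))  # runs asynchronously
result = torch.jit.wait(future)                    # 0 + 1 + 4 + 9
```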
Example Usage
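A sketch comparing the two routes to TorchScript on one toy module (`Classifier` is illustrative): `script` compiles the source, while `trace` records one execution.

```python
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x).softmax(dim=-1)

model = Classifier().eval()

scripted = torch.jit.script(model)                 # compile the source
traced = torch.jit.trace(model, torch.randn(1, 10))  # record one run
```

With no data-dependent control flow in `forward`, both produce equivalent graphs; with branches or loops, prefer `script`.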
Best Practices
When to use script vs trace
- Use torch.jit.script when your code has control flow (if statements, loops).
- Use torch.jit.trace for models with consistent execution paths.
- script analyzes Python code directly, while trace records operations during execution.
Performance optimization
- Use torch.jit.freeze before deployment to inline constants.
- Apply optimize_for_inference for inference-only models.
- Consider torch.compile (PyTorch 2.0+) as an alternative.
Debugging TorchScript
- Use check_trace=True when tracing to validate correctness.
- Add type annotations for better error messages.
- Use @torch.jit.ignore for code that doesn't need to be compiled.