Introduction
This repository holds NVIDIA-maintained utilities to streamline mixed precision and distributed training in Pytorch. Some of the code here will be included in upstream Pytorch eventually. The intent of Apex is to make up-to-date utilities available to users as quickly as possible.
Installation
Each apex.contrib module requires one or more install options other than --cpp_ext and --cuda_ext. Note that contrib modules do not necessarily support stable PyTorch releases; some of them may only be compatible with nightlies.

Containers
NVIDIA PyTorch Containers are available on NGC: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch. The containers come with all the custom extensions available at the moment.
See the NGC documentation for further details, such as how to pull and run the containers.
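As an illustration only (not a command from this README; the release tag is a placeholder to be replaced with a current one from the NGC catalog), a container can be pulled and run with Docker:

# Pull an NGC PyTorch container; replace <xx.yy> with an actual release tag
docker pull nvcr.io/nvidia/pytorch:<xx.yy>-py3
# Start an interactive session with GPU access
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:<xx.yy>-py3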
From Source
To install Apex from source, we recommend using the nightly Pytorch obtainable from https://github.com/pytorch/pytorch.
The latest stable release obtainable from https://pytorch.org should also work.
We recommend installing Ninja to make compilation faster.

Linux
For performance and full functionality, we recommend installing Apex with CUDA and C++ extensions using environment variables:
Using Environment Variables (Recommended)
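A minimal sketch of this method, using the APEX_CPP_EXT and APEX_CUDA_EXT variables from the extension table at the end of this section (adjust the pip options to your environment):

git clone https://github.com/NVIDIA/apex
cd apex
# Enable the C++ and CUDA extensions via environment variables
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation .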
To reduce the build time, parallel building can be enabled:
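As a rough illustration only (the exact knob may differ between Apex versions), the --parallel option discussed below can be passed as a build option in the same --config-settings style used elsewhere in this README; the job count of 8 is arbitrary:

APEX_CPP_EXT=1 APEX_CUDA_EXT=1 pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--parallel 8" .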
When CPU cores or memory are limited, the --parallel option is generally preferred over --threads. See pull #1882 for more details.

Using Command-Line Flags (Legacy Method)
The traditional command-line flags are still supported:
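For example, using the same flag-passing form quoted in the Windows section below:

pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" .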
Python-Only Build
APEX also supports a Python-only build via:
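For example, from the root of the cloned repository (the same command referenced in the Windows section below):

pip install -v --no-cache-dir .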
A Python-only build omits:
- Fused kernels required to use apex.optimizers.FusedAdam.
- Fused kernels required to use apex.normalization.FusedLayerNorm and apex.normalization.FusedRMSNorm.
- Fused kernels that improve the performance and numerical stability of apex.parallel.SyncBatchNorm.
- Fused kernels that improve the performance of apex.parallel.DistributedDataParallel and apex.amp.

DistributedDataParallel, amp, and SyncBatchNorm will still be usable, but they may be slower.

[Experimental] Windows

pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation --config-settings "--build-option=--cpp_ext" --config-settings "--build-option=--cuda_ext" . may work if you were able to build Pytorch from source on your system. A Python-only build via pip install -v --no-cache-dir . is more likely to work. If you installed Pytorch in a Conda environment, make sure to install Apex in that same environment.
Custom C++/CUDA Extensions and Install Options
If a requirement of a module is not met, then it will not be built.
| Module name | Environment variable | Install option | Notes |
| --- | --- | --- | --- |
| apex_C | APEX_CPP_EXT=1 | --cpp_ext | |
| amp_C | APEX_CUDA_EXT=1 | --cuda_ext | |
| syncbn | APEX_CUDA_EXT=1 | --cuda_ext | |
| fused_layer_norm_cuda | APEX_CUDA_EXT=1 | --cuda_ext | apex.normalization |
| mlp_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| scaled_upper_triang_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| generic_scaled_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| scaled_masked_softmax_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| fused_weight_gradient_mlp_cuda | APEX_CUDA_EXT=1 | --cuda_ext | |
| permutation_search_cuda | APEX_PERMUTATION_SEARCH=1 | --permutation_search | apex.contrib.sparsity |
| bnp | APEX_BNP=1 | --bnp | apex.contrib.groupbn |
| xentropy | APEX_XENTROPY=1 | --xentropy | apex.contrib.xentropy |
| focal_loss_cuda | APEX_FOCAL_LOSS=1 | --focal_loss | apex.contrib.focal_loss |
| fused_index_mul_2d | APEX_INDEX_MUL_2D=1 | --index_mul_2d | apex.contrib.index_mul_2d |
| fused_adam_cuda | APEX_DEPRECATED_FUSED_ADAM=1 | --deprecated_fused_adam | apex.contrib.optimizers |
| fused_lamb_cuda | APEX_DEPRECATED_FUSED_LAMB=1 | --deprecated_fused_lamb | apex.contrib.optimizers |
| fast_layer_norm | APEX_FAST_LAYER_NORM=1 | --fast_layer_norm | apex.contrib.layer_norm (different from fused_layer_norm) |
| fmhalib | APEX_FMHA=1 | --fmha | apex.contrib.fmha |
| fast_multihead_attn | APEX_FAST_MULTIHEAD_ATTN=1 | --fast_multihead_attn | apex.contrib.multihead_attn |
| transducer_joint_cuda | APEX_TRANSDUCER=1 | --transducer | apex.contrib.transducer |
| transducer_loss_cuda | APEX_TRANSDUCER=1 | --transducer | apex.contrib.transducer |
| cudnn_gbn_lib | APEX_CUDNN_GBN=1 | --cudnn_gbn | apex.contrib.cudnn_gbn |
| peer_memory_cuda | APEX_PEER_MEMORY=1 | --peer_memory | apex.contrib.peer_memory |
| nccl_p2p_cuda | APEX_NCCL_P2P=1 | --nccl_p2p | apex.contrib.nccl_p2p |
| fast_bottleneck | APEX_FAST_BOTTLENECK=1 | --fast_bottleneck | apex.contrib.bottleneck; requires peer_memory_cuda and nccl_p2p_cuda |
| fused_conv_bias_relu | APEX_FUSED_CONV_BIAS_RELU=1 | --fused_conv_bias_relu | apex.contrib.conv_bias_relu |
| distributed_adam_cuda | APEX_DISTRIBUTED_ADAM=1 | --distributed_adam | apex.contrib.optimizers |
| distributed_lamb_cuda | APEX_DISTRIBUTED_LAMB=1 | --distributed_lamb | apex.contrib.optimizers |
| _apex_nccl_allocator | APEX_NCCL_ALLOCATOR=1 | --nccl_allocator | apex.contrib.nccl_allocator |
| _apex_gpu_direct_storage | APEX_GPU_DIRECT_STORAGE=1 | --gpu_direct_storage | apex.contrib.gpu_direct_storage |

You can also build all contrib extensions at once by setting APEX_ALL_CONTRIB_EXT=1.
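For instance, a single contrib extension or all of them can be requested at install time by combining the environment variables named above with the install command (a sketch; whether the base extensions must be built alongside a given contrib extension depends on the module):

# Build only the xentropy contrib extension, together with the base C++/CUDA extensions
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_XENTROPY=1 pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation .
# Or request every contrib extension at once
APEX_CPP_EXT=1 APEX_CUDA_EXT=1 APEX_ALL_CONTRIB_EXT=1 pip install -v --disable-pip-version-check --no-cache-dir --no-build-isolation .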