Minimize the performance impact of the device kernel profiling support when
the profiler plugin is not loaded.
Reduce the overhead of CUDA graph capturing, which had increased in NCCL
2.26.2 for large graphs.
Fix the exchange of enhanced connection establishment (ECE) options to
address potential slowdowns on networks utilizing RoCE.
Test whether cuMem host allocations work and disable them if they do not.
Such allocations have been enabled by default since NCCL 2.24 when the CUDA
driver version is at least 12.6; they rely on NUMA support, which is not
available by default under Docker. We recommend invoking Docker with
"--cap-add SYS_NICE" to enable it.
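For containerized runs, the recommendation above can be applied as follows
(a sketch; the image name is a placeholder, and NCCL_CUMEM_HOST_ENABLE is
assumed to be the variable controlling this feature):

```shell
# Grant the capability NCCL needs for NUMA-aware cuMem host allocations:
docker run --gpus all --cap-add SYS_NICE my-nccl-image

# Alternatively, assuming NCCL_CUMEM_HOST_ENABLE controls the feature,
# cuMem host allocations can be disabled explicitly:
docker run --gpus all -e NCCL_CUMEM_HOST_ENABLE=0 my-nccl-image
```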
Fix an initialization error when running with NCCL_NET_GDR_C2C=1 on
multiple MNNVL domains with non-uniform network configurations across
nodes.
Fix the printing of sub-seconds in the debug log when using a custom
NCCL_DEBUG_TIMESTAMP_FORMAT setting.
Profiler improvements
* Add events for CUDA kernel start and end.
* Allow network plugins to generate profiling events.
* Enable profiling on a per-operation basis, rather than per-communicator.
* Add support for graph capturing.
Add implicit launch order
* Prevent deadlocks when using multiple NCCL communicators per device by
  implicitly ordering NCCL operations according to the host program
  order. Disabled by default, set NCCL_LAUNCH_ORDER_IMPLICIT=1 to enable.
* Add a complementary mechanism to detect host threads racing to launch
  operations to the same device. Enabled by default, set
  NCCL_LAUNCH_RACE_FATAL=0 to disable.
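Both controls above are plain environment variables; a typical opt-in could
look like:

```shell
# Opt in to implicit launch ordering (disabled by default):
export NCCL_LAUNCH_ORDER_IMPLICIT=1
# Optionally demote launch-race detection from fatal to a warning
# (the check itself is enabled by default):
export NCCL_LAUNCH_RACE_FATAL=0
```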
Optimize the PAT algorithm
* Separate the computation and execution of PAT steps onto different warps,
  allowing up to 16 PAT steps to run in parallel, which significantly
  accelerates PAT and reduces its linear part.
Add support for setting QoS per communicator
* Add a new trafficClass field to the communicator configuration, to
allow the application to select a particular traffic class for a
given communicator. The meaning of the traffic class is
network-specific and should be set in accordance with the network
configuration.
* For the IB/RoCE plugin, existing config variables such as NCCL_IB_SL
and NCCL_IB_TC take precedence.
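A minimal sketch of selecting a traffic class through the communicator
config (the numeric value is an arbitrary placeholder whose meaning is
network-specific):

```c
#include <nccl.h>

/* Sketch: initialize a communicator with a specific traffic class.
 * The value 105 is a placeholder; its interpretation depends on the
 * network configuration, and for IB/RoCE the NCCL_IB_SL / NCCL_IB_TC
 * variables take precedence if set. */
ncclResult_t initCommWithTrafficClass(ncclComm_t* comm, int nRanks,
                                      ncclUniqueId id, int rank) {
  ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
  config.trafficClass = 105;
  return ncclCommInitRankConfig(comm, nRanks, id, rank, &config);
}
```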
Allow enabling GPU Direct RDMA specifically on C2C platforms
* Disabled by default, set NCCL_NET_GDR_C2C=1 to enable.
Do not disable user buffer registration unless PXN is actually in use
* Only disable user buffer (UB) registration when a communicator has more
  than one rank per node on any node.
RAS subsystem improvements
* Report operation counts separately for each collective operation type.
* Provide details about missing communicator ranks and reliably
distinguish ranks that are no longer a given communicator's members
(now reported as NOCOMM) from those that failed to respond.
Add support for timestamps to NCCL diagnostic messages
* On by default for WARN messages; NCCL_DEBUG_TIMESTAMP_LEVELS can be
used to enable them for other debug levels as well.
* The format can be changed using the NCCL_DEBUG_TIMESTAMP_FORMAT config
variable.
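For example, assuming the format string follows strftime(3)-style
conversions, timestamps could be extended to INFO messages like so:

```shell
# Enable timestamps for INFO in addition to the default WARN:
export NCCL_DEBUG=INFO
export NCCL_DEBUG_TIMESTAMP_LEVELS=WARN,INFO
# Customize the prefix (assuming strftime-style conversions):
export NCCL_DEBUG_TIMESTAMP_FORMAT="[%F %T] "
```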
Reduce the memory usage with NVLink SHARP (NVLS)
* Potentially save hundreds of MBs of device memory by considering the
  multicast buffer size granularity separately from the address alignment.
Update performance tuning for recent Intel CPUs
* Improve algorithm/protocol selection on recent CPUs such as Emerald
Rapids and Sapphire Rapids.
Improve channel scheduling when mixing LL and Simple operations.
* Make LL operations account for 4x more traffic to ensure that LL and
  Simple operations complete at the same time.
Refactor the plugin code
* Clean up and harmonize the support code across the network, tuner,
and profiler plugins.
Add support for comment lines (starting with #) in the nccl.conf file
* Issue #1540.
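With this change, a nccl.conf could look like the following (the variable
values are illustrative):

```shell
# Shared NCCL settings for this cluster
NCCL_DEBUG=WARN
# Restrict NCCL to the high-speed fabric interface
NCCL_SOCKET_IFNAME=eth0
```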
Make user buffer registration problems print an INFO instead of a WARN.
Drop support for network plugin interface version 5.
Fix a race condition with split-shared communicators
* NCCL could hang during connection setup if multiple communicators that
  share resources were grouped together.
Fix a performance regression when using NCCL_CROSS_NIC=1
* NCCL would unnecessarily alternate rings, breaking the GPU-NIC
associations.
Make GID index detection code more resilient
* Dynamic GID detection code was giving up too soon if the
detected index was not available (e.g., wasn't mapped to the
container's sysfs).
* Issues #1538, #1573.
Fix a race condition with non-blocking operation
* Fix an issue when creating a non-blocking communicator after a
  non-blocking collective operation on another communicator.
Fix shared memory usage on recent Blackwell GPUs.
* Issues NVIDIA/nccl-tests#287, NVIDIA/nccl-tests#291, #1637.
Fix an error with NIC fusion and IB SHARP when recreating communicators
* Disable the unloading of network plugins.
Make auto-merge failures in NIC fusion non-fatal
* This could happen when trying to merge IB and RoCE devices.
Fixes to ncclCommAbort
* Fix hangs due to the progress thread spinning indefinitely on the
network progress.
* Reduce the abort time by up to two orders of magnitude.
Fix a crash when libnccl.so was dynamically unloaded
* The RAS subsystem was missing a clean-up handler.
Fix a hang if the network plugin's test() call returns an error.
Fix a hang on heterogeneous architectures
* Harmonize tuning across ranks to avoid inconsistent tuning choices,
  which could cause a hang.
Fix double-free on failed ncclCommInitRank and ncclCommFinalize.
Fix a potential list traversal bug during a group launch of multiple
communicators
* Issue #1599.
Unify the handling of NCCL configuration variables
* Under rare circumstances, some variables specified in the config file
could be ignored.
Add support for CUDA 12.0, drop Kepler (sm_35).
Add support for H100 features.
Make socket code more robust and protected. Solves #555.
Improve performance on large CUDA graphs, reducing dependencies.
Reduce inter-socket bandwidth on AMD CPUs to favor better paths.
Various fixes to ncclCommAbort.
Make service thread polling resistant to EINTR.
Compile with profiling API by default.
Extend NVTX instrumentation with call arguments.
Fix a crash with CollnetChain on some node topologies.
Fix a hang when interleaving the capture of different graphs.
Fix a hang during init in multi-threaded mode.
Fix potential data corruption with LL128 protocol on unaligned buffers.
Fix CPU usage during preconnect.
Fix a double-free in the error path for ncclCommInitAll.
Work around a hang on H100 with Ring/LL128 on 2 GPUs.
Optimize CUDA graph launch; avoid launching a CPU callback for
intra-node operations.
Simplify kernel common code to improve the latency of send/recv
operations.
Strengthen CUDA streams semantics.
Change NET API to v6, to add dmabuf support.
Add ncclGetLastError() function.
Add ncclRemoteError code and use it for remote network errors.
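A sketch of how these two additions can be combined in an error-checking
wrapper (passing NULL to ncclGetLastError is assumed to be acceptable here,
as its communicator argument is currently unused):

```c
#include <stdio.h>
#include <nccl.h>

/* Sketch: report NCCL failures with the new diagnostic APIs.
 * ncclGetLastError() returns a human-readable description of the last
 * error; ncclRemoteError indicates a failure on a remote peer or in
 * the network rather than locally. */
#define NCCLCHECK(call) do {                                          \
    ncclResult_t res_ = (call);                                       \
    if (res_ != ncclSuccess) {                                        \
      fprintf(stderr, "NCCL failure %s:%d: '%s'\n",                   \
              __FILE__, __LINE__, ncclGetLastError(NULL));            \
      if (res_ == ncclRemoteError)                                    \
        fprintf(stderr, "  (error originated on a remote rank)\n");   \
    }                                                                 \
  } while (0)
```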
Support the use of a different NCCL_NET parameter per communicator.
Add support for SHM and P2P transfers using cudaMemcpy.