Improvements for GB200 systems
* Optimize the network performance by alternating the direction of the
  rings and the NIC-to-GPU assignment across communicators to limit
  unnecessary sharing.
* Fix the detection of C2C links when GPUDirect RDMA is disabled
  between a GPU and a NIC.
* Fix PXN support on MNNVL systems, where NCCL would try (and fail) to
share regular host memory across multiple nodes.
* Fix P2C (PXN over C2C), which is now preferred over regular PXN. This
support is currently preliminary and is disabled by default; use
NCCL_PXN_C2C=1 to enable.
Further reduce the overheads of CUDA graph capturing, which increased in
NCCL 2.26.2 for large graphs.
Optimize the network performance on DGX B200 systems by adjusting the
bandwidths provided to the graph search algorithm.
Enable fp8 reductions in symmetric kernels on Blackwell with CUDA 12.8.
Restore the plugin name handling logic to make it possible to specify a
path to the plugin (Issue #1732).
Restore the ability to change NCCL_COLLNET_ENABLE during execution
(Issue #1741).
Add an example tuner plugin with CSV-based overrides.
Remove an x86 dependency from the example profiler.
Symmetric memory API and symmetric kernels
* Redesign from the ground up, enabling major latency and bandwidth
improvements.
* Add new API calls to register user-allocated memory among communicator
  ranks into an NCCL window: ncclCommWindowRegister() and
  ncclCommWindowDeregister(). The calls currently support symmetric
  registration for P2P and NVLS, and require VMM memory buffers (i.e.,
  CUMEM must be operational).
* Implement specialized kernels taking advantage of symmetrically
registered memory, with performance gains expected particularly for
small to medium message sizes.
* The kernels support 32-bit floating point types and smaller, with sum
  as the reduction operator and no more than one collective operation
  per group.
* Floating point summation is always done in fp32 accumulators (with the
exception of fp8 on NVLS, where it uses fp16 inside the switch). Thus,
the accuracy with fp8 and fp16 data types should be much improved.
* This initial implementation supports non-network communicators only (P2P
and NVLS transports).
* To explore this functionality, users need to invoke the new memory
  registration API calls with the NCCL_WIN_COLL_SYMMETRIC flag, and all
  ranks of a communicator must pass buffers at the same offset in the
  same registration when invoking a collective NCCL operation (see the
  sketch below).
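As an illustration, here is a minimal sketch of the intended flow. The
ncclWindow_t handle type and the exact ncclCommWindowRegister()
signature are inferred from the description above, and error handling
is omitted; consult nccl.h for the authoritative declarations.

    #include <nccl.h>

    // Sketch only: assumes an initialized communicator and stream, and
    // that ncclMemAlloc() returns VMM/CUMEM-backed memory here.
    void allreduceSymmetric(ncclComm_t comm, cudaStream_t stream, size_t count) {
      void* buf = NULL;
      ncclWindow_t win;
      ncclMemAlloc(&buf, count * sizeof(float));
      // Register the buffer into an NCCL window on every rank; the
      // NCCL_WIN_COLL_SYMMETRIC flag requests symmetric registration.
      ncclCommWindowRegister(comm, buf, count * sizeof(float), &win,
                             NCCL_WIN_COLL_SYMMETRIC);
      // All ranks pass buffers at the same offset in the registration.
      ncclAllReduce(buf, buf, count, ncclFloat, ncclSum, comm, stream);
      cudaStreamSynchronize(stream);
      ncclCommWindowDeregister(comm, win);
      ncclMemFree(buf);
    }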
Add support for DGX Spark.
Add support for DirectNIC (CX8) to the internal IB plugin.
Add a new ncclCommShrink() API call
* A non-collective call, similar to ncclCommSplit(), that makes it
  possible to exclude some (possibly unresponsive) ranks from the
  parent communicator (see the sketch below).
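A hedged sketch of how the call might be used; the exact prototype and
the NCCL_SHRINK_DEFAULT flag name below are assumptions based on this
description, so consult nccl.h for the authoritative declaration.

    #include <nccl.h>

    // Sketch: drop two (possibly unresponsive) ranks from a parent
    // communicator. Argument order and flag name are assumptions.
    ncclResult_t shrinkExample(ncclComm_t parent, ncclComm_t* child) {
      int excludeRanks[2] = {3, 7};  // ranks to remove from the parent
      return ncclCommShrink(parent, excludeRanks, 2, child,
                            /*config=*/NULL, NCCL_SHRINK_DEFAULT);
    }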
Add support for loading multiple network plugins
* This enables the creation of generic containers that can work across a
range of providers.
* Allow NCCL_NET_PLUGIN to accept a comma-separated list of plugins to
load.
NVLink SHARP (NVLS) improvements
* Implement NVLS+IB SHARP support for AllGather and ReduceScatter with
user buffer registration. This improves performance and reduces the
number of CTAs needed to achieve peak bandwidth.
* Gracefully fall back by default to other transports if NVLS
  initialization fails (the old behavior of returning an error code from
  an NCCL call can be preserved by setting NCCL_NVLS_ENABLE=1).
* Decrease the NVLS channel count to 24 on Blackwell systems with multiple
NVLink domains per communicator.
* Enable fine-tuning of NCCL behavior per communicator using new
"ncclConfig_t" members "collnetEnable", "CTAPolicy", and "nvlsCTAs".
Profiler improvements
* Extend the init function by adding the communicator name, comm id
  (hash), rank, number of ranks, number of nodes, and the NCCL log
  function to the argument list. This makes the name and the comm id
  available to all events in the communicator without explicitly passing
  them to each individual event. Add the communicator id and rank to the
  profiler trace filename. The communicator name can now be set via a
  new "ncclConfig_t" member, "commName" (see the sketch after this
  list).
* Improve the accuracy of the GPU kernel events by providing GPU-generated
timestamps for the start and stop of every NCCL operation.
* Harmonize proxy events, removing overlaps between ProxyOp and ProxyStep
states.
* Add support for network-defined event updates (through
"recordEventState").
* Report the correct number of channels used by every collective/p2p
  operation (previously this was set to nMaxChannels for collectives and
  absent for p2p operations).
* Fix the logic on proxyCtrl Idle/Active events (Issue #1162).
* Fix an issue where the network proxy profiler could lose track of an
event identifier (Issue #1682).
* Improve the backward compatibility with plugins older than v4.
* Ensure that the work counters are 0-initialized.
* Fix a potential race condition in the network profiler that could result
in an event being linked to a wrong parent.
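For instance, naming a communicator so that profiler events and trace
filenames can identify it might look like this (a sketch; "commName" is
the member named above, and the name value is illustrative):

    #include <nccl.h>

    // Sketch: give the communicator a human-readable name for profiling.
    ncclResult_t initNamed(ncclComm_t* comm, int nranks, ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.commName = "grad_allreduce";  // illustrative name
      return ncclCommInitRankConfig(comm, nranks, id, rank, &config);
    }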
MNNVL improvements
* Increase the number of NICs used to communicate between MNNVL domains
  on GB200 systems to 16, to optimize the performance of collective
  operations.
* Add support for more complex MNNVL topologies with up to 32 NICs per
node.
* If the MNNVL fabric initialization was unsuccessful, NCCL will now fail
by default, so as to avoid inadvertently falling back to a potentially
much slower network transport. Such failures are typically due to a
misconfigured IMEX support on the system. To continue without MNNVL,
restart the job with NCCL_MNNVL_ENABLE=0.
* Fix a potential hang in alltoall-like communication patterns at a scale
of over 80 ranks.
* Make NCCL_P2P_DISABLE=1 imply NCCL_MNNVL_ENABLE=0 (so the latter no
longer needs to be specified on MNNVL systems).
* Fix an initialization failure when NCCL_TOPO_FILE is used on MNNVL
systems.
* Fix the graph search to exclude non-local NICs.
* Fix the SHM transport to use fabric handles on MNNVL systems.
NIC Fusion improvements
* Disable the creation of fused NICs for physical devices that haven't
been merged.
* Flatten multiple ports to a single PCI device within the internal IB
plugin and reparent dual-port NICs under the first PCI parent. If the
parent is not a PCI switch, PCI devices for fused NICs won't be
duplicated.
* Route traffic on GB200-CX8 systems through DirectNIC, not the host
interface.
Improve support for platforms with C2C connectivity (e.g., GB200)
* Enable GPUDirect RDMA for the NICs by default.
* Add support for P2C (PXN over C2C) and the LL128 protocol.
Extend NCCL fault tolerance in multithreaded scenarios
* Support the creation of multiple nonblocking communicators within a
  single group and polling for their completion in parallel using
  multiple threads, one per communicator (see the sketch below).
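A sketch of the pattern this enables (illustrative only; real code
would add timeouts and abort handling):

    #include <nccl.h>
    #include <pthread.h>

    // Sketch: poll one nonblocking communicator per thread until it
    // leaves the ncclInProgress state.
    static void* pollComm(void* arg) {
      ncclComm_t comm = (ncclComm_t)arg;
      ncclResult_t state = ncclInProgress;
      while (state == ncclInProgress) ncclCommGetAsyncError(comm, &state);
      return NULL;
    }

    // Create several nonblocking communicators in one group, then poll
    // each for completion from its own thread (assumes n <= 16).
    void initAndPoll(ncclComm_t* comms, int n, int nranks,
                     ncclUniqueId* ids, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.blocking = 0;  // nonblocking communicators
      ncclGroupStart();
      for (int i = 0; i < n; i++)
        ncclCommInitRankConfig(&comms[i], nranks, ids[i], rank, &config);
      ncclGroupEnd();  // may return ncclInProgress
      pthread_t threads[16];
      for (int i = 0; i < n; i++)
        pthread_create(&threads[i], NULL, pollComm, comms[i]);
      for (int i = 0; i < n; i++) pthread_join(threads[i], NULL);
    }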
Enable ncclImplicitOrderLaunch for CUDA 12.9+
* This can potentially speed up NCCL_LAUNCH_ORDER_IMPLICIT.
Improve the netSocket transport latency and control
* Provide finer control over the size of the socket send/receive buffers,
the task size, and the number of sockets that a single peer can open.
* Add support for the inlining of small messages behind the header when
using multiple sockets per connection.
Improve the readability of the CPU affinity in the debug output
* Print it as a range string rather than a bitmask.
Fix a potential race condition in graph execution
* A contention could arise when mixing graph and non-graph execution.
Improve PXN connection code
* Avoid duplicate and unused connections.
RAS fixes
* Fix a memory corruption at job termination time in case of a previously
failed initialization of a RAS socket connection.
* Fix a race condition leading to a crash when generating a RAS report
during communicator initialization (Issues #1669, #1718).
* Fix a potential race condition when gathering data for a RAS status
report.
Fix a potential memory corruption in ncclCommSplit()
* Memory could get corrupted when resource sharing was in use and the size
of the NVLink domain in the new communicator was smaller than in the old
one.
Fix asynchronous graph upload
* Fix a small memory leak.
* Fix oversynchronization.
Add a check for out-of-memory conditions in ncclMemAlloc()
Clean up the NCCL socket code
* accept() now also retries if merely reading the magic number failed
  (Issue #1613).
* connect() now also retries if poll() did not return a POLLOUT event
  (Issue #1618).
* Add error checking in a few instances (Issue #1539).
* Fix the loop condition in ncclFindInterfaceMatchSubnet() (Issue #1574).
* Clean up the debug output, downgrading WARN messages to INFO in
non-critical cases, and printing the peer's address where relevant.
Switch NCCL_DEBUG_FILE to line buffering
* This should help avoid mixed-up partial output lines in multithreaded
cases.
Other minor fixes
* Improve the checks for buffer overflows in the graph code (Issue #1585).
* Extend logging and state clearing to all four events in the internal IB
plugin (Issue #1650).
* Fix the error path in case IB communication is not ready (Issue #1489).
* Add ECE logging for IB fabric.
* Fix various minor issues in the graph module (Issue #1635).
* Clean up the debug output in the graph code, downgrading WARN messages
to INFO in non-critical cases.
* Add a missing argument to a directSend() call (Issue #1628).
* Remove duplicate code in sendProxySetup() (Issue #1420).
* Fix the order of arguments of cudaDeviceCanAccessPeer() (Issue #1507).
* Fix compiler warnings with GCC 14.
* Fix a typo in a comment (Issue #1236).
Profiler improvements
* Add events for CUDA kernel start and end.
* Allow network plugins to generate profiling events.
* Enable profiling on a per-operation basis, rather than per-communicator.
* Add support for graph capturing.
Add implicit launch order
* Prevent deadlocks when using multiple NCCL communicators per device
  by implicitly ordering NCCL operations according to the host program
  order. Disabled by default; set NCCL_LAUNCH_ORDER_IMPLICIT=1 to
  enable.
* Add a complementary mechanism to detect host threads racing to launch
  to the same device. Enabled by default; set NCCL_LAUNCH_RACE_FATAL=0
  to disable.
Optimize the PAT algorithm
* Separate the computation and execution of PAT steps onto different
  warps, allowing up to 16 PAT steps to run in parallel, which
  significantly accelerates PAT and reduces its linear part.
Add support for setting QoS per communicator
* Add a new trafficClass field to the communicator configuration to
  allow the application to select a particular traffic class for a
  given communicator. The meaning of the traffic class is
  network-specific and should be set in accordance with the network
  configuration (see the sketch below).
* For the IB/RoCE plugin, existing config variables such as NCCL_IB_SL
and NCCL_IB_TC take precedence.
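A sketch of setting the new field (the numeric value below is
illustrative and must match the network setup):

    #include <nccl.h>

    // Sketch: request a network-specific traffic class for one
    // communicator.
    ncclResult_t initWithTrafficClass(ncclComm_t* comm, int nranks,
                                      ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.trafficClass = 105;  // meaning is defined by the network
      return ncclCommInitRankConfig(comm, nranks, id, rank, &config);
    }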
Allow enabling GPUDirect RDMA specifically on C2C platforms
* Disabled by default, set NCCL_NET_GDR_C2C=1 to enable.
Do not disable user buffer registration unless PXN is really used
* Only disable UB when a communicator has more than one rank per
node on any node.
RAS subsystem improvements
* Report operation counts separately for each collective operation type.
* Provide details about missing communicator ranks and reliably
distinguish ranks that are no longer a given communicator's members
(now reported as NOCOMM) from those that failed to respond.
Add support for timestamps to NCCL diagnostic messages
* On by default for WARN messages; NCCL_DEBUG_TIMESTAMP_LEVELS can be
used to enable them for other debug levels as well.
* The format can be changed using the NCCL_DEBUG_TIMESTAMP_FORMAT config
variable.
Reduce the memory usage with NVLink SHARP (NVLS)
* Potentially save hundreds of MBs of device memory by considering the
  multicast buffer size granularity separately from the address
  alignment.
Update performance tuning for recent Intel CPUs
* Improve algorithm/protocol selection on recent CPUs such as Emerald
Rapids and Sapphire Rapids.
Improve channel scheduling when mixing LL and Simple operations.
* Make LL operations account for 4x more traffic to ensure that LL and
  Simple operations complete at the same time.
Refactor the plugin code
* Clean up and harmonize the support code across the network, tuner,
and profiler plugins.
Add support for comment lines (starting with #) in the nccl.conf file
* Issue #1540.
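For example, a nccl.conf along these lines is now accepted (the
variable settings are illustrative):

    # Site-wide defaults for this cluster; comment lines are ignored.
    NCCL_DEBUG=WARN
    # Pin the interface used for socket traffic.
    NCCL_SOCKET_IFNAME=eth0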
Make user buffer registration problems print an INFO instead of a WARN.
Drop support for network plugin interface version 5.
Fix a race condition with split-shared communicators
* NCCL could hang during connection setup if multiple communicators
  that share resources were grouped together.
Fix a performance regression when using NCCL_CROSS_NIC=1
* NCCL would unnecessarily alternate rings, breaking the GPU-NIC
associations.
Make GID index detection code more resilient
* Dynamic GID detection code was giving up too soon if the
detected index was not available (e.g., wasn't mapped to the
container's sysfs).
* Issues #1538, #1573.
Fix a race condition with non-blocking operations
* Fix an issue when creating a non-blocking communicator after a
  non-blocking collective operation on another communicator.
Fix shared memory usage on recent Blackwell GPUs.
* Issues NVIDIA/nccl-tests#287, NVIDIA/nccl-tests#291, #1637.
Fix an error with NIC fusion and IB SHARP when recreating communicators
* Disable the unloading of network plugins
Make the auto-merge failures in the NIC fusion non-fatal
* This could happen when trying to merge IB and RoCE devices.
Fixes to ncclCommAbort
* Fix hangs due to the progress thread spinning indefinitely on the
network progress.
* Reduce the abort time by up to two orders of magnitude.
Fix a crash when libnccl.so was dynamically unloaded
* The RAS subsystem was missing a clean-up handler.
Fix a hang if the network plugin's test() call returns an error.
Fix a hang on heterogeneous architectures
* Ensure that tuning is harmonized across ranks to avoid differing
  tuning choices, which could cause a hang.
Fix double-free on failed ncclCommInitRank and ncclCommFinalize.
Fix a potential list traversal bug during a group launch of multiple
communicators
* Issue #1599.
Unify the handling of NCCL configuration variables
* Under rare circumstances, some variables specified in the config file
could be ignored.
Add Blackwell/SM100 support
* Add compilation for sm100
* Add graph search speeds for Blackwell
* Optimize graph search to converge on large NVLink domains
* Limit NVLS heads to 32
* Increase various limits to fit large NVLink domains
* Add extra checks for IMEX setup, needed for MNNVL
* Increase MAXCHANNELS to 64
Extend NVTX instrumentation to track NCCL communicators
* Add communicator ID to NVTX traces to allow for correlation
between ranks.
RAS fixes
Network user buffer support for collectives
* Leverage user buffer registration to achieve zero-copy
  inter-node communications for Ring, NVLS, and CollNet.
Add RAS subsystem
* Create a RAS thread keeping track of all NCCL communicators.
* Add an ncclras tool that contacts the RAS thread and obtains a
  report.
Add fp8 support
* Add support for e5m2 and e4m3 8-bit floating point operations (see
  the sketch below).
* Use Tree/PAT algorithms when possible for better numerical
  stability.
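As a sketch, an fp8 allreduce then looks like any other reduction. The
enumerator name ncclFp8E4M3 (alongside ncclFp8E5M2) is assumed here and
is only available when CUDA fp8 types exist:

    #include <nccl.h>

    // Sketch: an e4m3 fp8 allreduce; both buffers hold fp8 values.
    ncclResult_t fp8Allreduce(const void* sendbuf, void* recvbuf, size_t count,
                              ncclComm_t comm, cudaStream_t stream) {
      return ncclAllReduce(sendbuf, recvbuf, count, ncclFp8E4M3, ncclSum,
                           comm, stream);
    }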
Add NIC fusion
* Add a NET API to ask the network plugin to fuse a set of
interfaces together.
* Fuse multiple NICs under the same PCI switch as a single,
larger NIC.
Socket connection failure retry
* Retry in case of socket connection failure (unreachable host)
* Avoid "Software caused connection abort" errors on retries
QP connection failure retry
* Retry in case of IB QP connection failure during ibv_modify_qp.
NET API improvements
* Allow plugins to force a flush in case data and completion
ordering is not guaranteed.
* Indicate when completion is not needed (e.g. for the LL128
protocol), allowing plugins to skip generating a completion.
* Allow for full offload of allgather operations when using one
GPU per node.
NCCL_ALGO/NCCL_PROTO strict enforcement
* Extend the NCCL_ALGO/NCCL_PROTO syntax to allow specifying
  ALGO/PROTO filters for each collective operation.
* Strictly enforce the ALGO/PROTO filters: no longer fall back on the
  ring algorithm when the filtering leaves no option; error out
  instead.
Enable CUMEM host allocations
* Use cumem functions for host memory allocation by default.
Improved profiler plugin API
* Avoid dependencies on NCCL includes.
* Add information on whether the buffer is registered or not.
Adjust PAT tuning
* Improve transition between PAT and ring at scale.
Fix hangs when running with different GPU architectures
* Detect when we use a mix of GPU architectures
* Ensure Algo/Proto decisions are made based on that unified
state.
Fix FD leak in UDS
* Fix a leak when mapping buffers intra-node with cumem IPCs.
Fix crash when mixing buffer registration and graph buffer registration.
* Separate local and graph registration to avoid crashes when we free
buffers.
Fix user buffer registration with dmabuf
* Make ncclSend/ncclRecv communication with buffer registration functional
on network plugins relying on dmabuf for buffer registration.
Fix crash in IB code caused by uninitialized fields.
Fix non-blocking ncclSend/ncclRecv
* Fix case where ncclSend/ncclRecv would return ncclSuccess in non-blocking
mode even though the operation was not enqueued onto the stream.
* Issue #1495
Various compiler tweaks and fixes
* PR #758
Fix typo in ncclTopoPrintGraph
* Issue #1468
Add scalable init API
* Add new ncclCommInitRankScalable to allow passing multiple unique
  IDs to the init function (see the sketch below).
* Spreads the load onto multiple bootstrap roots, allowing for
constant bootstrap time.
* Requires multiple ranks to create a unique ID, and the CPU-side
ID exchange code to call allgather[v] instead of broadcast.
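A hedged sketch of the new entry point; the prototype is inferred from
the description above, and every rank is assumed to pass the same array
of unique IDs:

    #include <nccl.h>

    // Sketch: initialize a communicator using several bootstrap roots.
    // A NULL config requests default communicator settings.
    ncclResult_t initScalable(ncclComm_t* comm, int nranks, int rank,
                              ncclUniqueId* ids, int nIds) {
      return ncclCommInitRankScalable(comm, nranks, rank, nIds, ids, NULL);
    }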
Accelerate init bootstrap operations
* Reduce the number of calls to allgather.
* Allow roots to reply early to ranks when information is already
available.
* Add an option to use ncclNet instead of sockets to perform
bootstrap allgather operations.
Add PAT algorithms for Allgather and ReduceScatter
* Parallel Aggregated Trees, a variation of the Bruck algorithm.
* Logarithmic number of network steps for small sizes at scale.
* Only supports one rank per node at the moment.
Add support for registered buffers for intra-node communication.
* Allow registered user buffers to be accessed directly intra-node
  (see the sketch below).
* Avoids extra copies in algorithms which permit it, saving
memory bandwidth and helping with compute overlap.
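A sketch using the pre-existing user-buffer registration calls
ncclCommRegister()/ncclCommDeregister(); error handling is omitted, and
whether a given operation uses the registration depends on the
algorithm selected:

    #include <nccl.h>

    // Sketch: register the receive buffer so eligible algorithms can
    // access it directly intra-node, avoiding extra copies.
    void allgatherRegistered(ncclComm_t comm, cudaStream_t stream,
                             const void* sendbuf, void* recvbuf,
                             size_t count, size_t recvBytes) {
      void* handle = NULL;
      ncclCommRegister(comm, recvbuf, recvBytes, &handle);
      ncclAllGather(sendbuf, recvbuf, count, ncclFloat, comm, stream);
      cudaStreamSynchronize(stream);
      ncclCommDeregister(comm, handle);
    }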
Add profiler plugin API
* New plugin API for profiling
* Supports various levels of profiling, with a hierarchy.
Asynchronous graph allocation
* Make calls to cudaMalloc and cudaMemcpy during graph allocation
asynchronous.
* Significantly speeds up graph capture.
Use fatal IB asynchronous events to stop network operation
* Avoids many other error messages
* Only fatal errors are affected; potentially transient errors
(e.g. port down) do not cause an immediate stop.
Set P2P level to PXB on AMD CPUs when using more than 2 GPUs per node
* P2P would cause a significant performance degradation when using
many GPUs, and therefore many interleaved data flows.
* Disable P2P through the CPU when we have 3+ GPUs per node; keep it
enabled when we only have 2 GPUs.
Improve the init logs to report the real NCCL function.
* Make the log report ncclCommInitRank or ncclCommSplit, rather than
the generic ncclCommInitRankFunc.
Add a parameter to set the location of the user configuration file.
* Add NCCL_CONF_FILE environment variable to set where the user's
configuration file resides.
Increase default IB timeout
* Increase IB timeout value from 18 to 20.
* Should help avoid fatal errors on large RoCE systems.
Add new check for nvidia peermem
* On Linux kernels 6.6+, /sys/kernel/mm/memory_peers is no longer
  present; check for /sys/module/nvidia_peermem/version instead.
Fix old performance regression when mixing small and large operations.
* Improves distribution of work on channels.
Fix crash when NUMA IDs are equal to -1.
* Can happen when a NIC is a virtual NIC, or when Linux doesn't
  know which NUMA node a device is attached to.
* Issue NVIDIA/nccl-tests#233
Fix tree graph search when NCCL_CROSS_NIC is set to 1.
* Would force NCCL to use the balanced_tree pattern, thereby
disabling LL128 on platforms with 1 GPU+1 NIC per PCI switch.
* Would also try to use alternate rings even though it was not
needed.
Compiler tweaks and fixes
* PR #1177
* PR #1228
Fix stack smash
* PR #1325
Fixes for multi-node NVLink + IB operation
Coverity fixes and comments.
Add local user buffer registration for NVLink SHARP.
Add tuning plugin support.
Increase net API to v7 to allow for device-side packet reordering;
remove support for v4 plugins.
Add support for RoCE ECE.
Add support for C2C links.
Better detect SHM allocation failures to avoid crash with Bus Error.
Fix missing thread unlocks in bootstrap (Fixes #936).
Disable network flush by default on H100.
Move device code from src/collectives/device to src/device.
Add support for IB SHARP to NVLS (NVLink SHARP algorithm).
Add NVLS+Tree algorithm.
Add support for memory management using cuMem* functions.
Use all NICs for Send/Receive operations on systems with more than
one NIC per GPU (#804).
Add ncclCommSplit primitive, with resource sharing option in config.
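For reference, a minimal split with resource sharing might look like
this (a sketch; splitShare is the config option referred to above):

    #include <nccl.h>

    // Sketch: split a communicator by color/key, sharing parent
    // resources where possible.
    ncclResult_t splitShared(ncclComm_t parent, int color, int rank,
                             ncclComm_t* child) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.splitShare = 1;  // share resources with the parent
      return ncclCommSplit(parent, color, /*key=*/rank, child, &config);
    }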
Fix alltoallv hang (#788).
Increase number of channels on H100 when we're not limited by NVLink.
Improve error reporting in case of IB failure, printing local and
remote ID (#779).
Add build option to allow compilation against RDMA includes instead
of dynamically loading IB verbs symbols (#802).
Fix context creation for progress thread (#803).
NET/IB: add option to use multiple QPs in round-robin mode.
Fix tree performance issue when NVB is disabled on HCM topologies.
Add support for CUDA 12.0, drop Kepler (sm_35).
Support for H100 features.
Make socket code more robust and protected. Solves #555.
Improve performance on large CUDA graphs, reducing dependencies.
Reduce inter-socket bandwidth on AMD CPUs to favor better paths.
Various fixes to ncclCommAbort.
Make service thread polling resistant to EINTR.
Compile with profiling API by default.
Extend NVTX instrumentation with call arguments.
Add network communication through another GPU connected with NVLink
(PXN).
Add aggregation of messages coming from different local GPUs through
PXN and going to the same destination.
Add new v5 plugin API with grouped receives and tags.
Add compat for v4 plugins.
Add naming of NCCL threads to help debugging.
Fix NVLink detection and avoid data corruption when some NVLinks are
down.
Add support for Relaxed Ordering for IB.
Add profiling and timing infrastructure.
Add support for bfloat16.
Add ncclAvg reduction operation.
Improve performance for aggregated operations.
Improve performance for tree.
Improve network error reporting.
Add NCCL_NET parameter to force a specific network.
Add NCCL_IB_QPS_PER_CONNECTION parameter to split IB traffic onto multiple queue pairs.
Fix topology detection error in WSL2.
Fix proxy memory elements affinity (improve alltoall performance).
Fix graph search on cubemesh topologies.
Fix hang in cubemesh during NVB connections.
Optimization for Tree allreduce on A100.
Improve aggregation performance.
Use shared buffers for inter-node send/recv.
Add NVTX profiling hooks.
Accelerate alltoall connections by merging communication for all
channels.
Add support for one hop communication through NVLink, for faster
send/recv communication on cubemesh topologies like DGX-1.
Improve alltoall scheduling to better balance intra/inter node
communication.
Increase send/recv parallelism by 8x, each warp sending or
receiving to a different peer.
Net: move to v4.
Net: make flush operation asynchronous to accelerate alltoall.
Net: define maximum number of requests.
Fix hang when using LL128 protocol after 2^31 steps.
Fix #379: topology injection failing when using fewer GPUs than
described in the XML.
Fix #394: protocol mismatch causing hangs or crashes when using
one GPU per node.
Add support for network collectives.
Add support for XML topology dump/injection.
Add text values for GDR and P2P Levels, including "NVL".
Add speed detection for PCI, Infiniband and Ethernet cards.
Add CPU detection for ARM and AMD CPUs.
Add support for adaptive routing on Infiniband.
Change NET plugin API to v3: merge PCI path and GPU pointer
capability into a single structure and add other properties.
Add LL128 Protocol.
Rewrite the topology detection and tree/ring creation (#179). Improve
tree performance by sending/receiving from different GPUs. Add
model-based tuning to switch between the different algorithms and
protocols.
Rework P2P/SHM detection in containers (#155, #248).
Detect duplicated devices and return an error (#231).
Add tuning for GCP.
Added detection of IBM/Power NVLink bridge device.
Add NUMA support to PCI distance calculations.
Added NCCL_IGNORE_CPU_AFFINITY env var.
Fix memory leaks (Issue #180).
Fix a compiler warning (Issue #178).
Replace non-standard variable length arrays (Issue #171).
Fix Tree+Shared Memory crash (PR #185).
Fix LL cleanup hang during long-running DL jobs.
Fix NCCL_RINGS environment variable handling.
Added extra checks to catch repeat calls to ncclCommDestroy()
(Issue #191).
Improve bootstrap socket connection reliability at scale.
Fix hostname hashing issue (Issue #187).
Code cleanup to rename all non-device files from *.cu to *.cc.
Add tree algorithms for allreduce to improve performance at scale.
Add ncclCommAbort() and ncclCommGetAsyncError() to properly handle
network errors and permit recovery.
Detect initial CPU affinity and no longer escape it.
Add support for inter-node communication using sockets and InfiniBand/RoCE.
Improve latency.
Add support for aggregation.
Improve LL/regular tuning.
Remove tests as those are now at github.com/nvidia/nccl-tests.