Add scalable init API
* Add new ncclCommInitRankScalable to allow for passing multiple
unique IDs to the init function.
* Spreads the load onto multiple bootstrap roots, allowing for
constant bootstrap time.
* Requires multiple ranks to create a unique ID, and the CPU-side
ID exchange code to call allgather[v] instead of broadcast.
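A minimal sketch of the new entry point, assuming the 2.23 signature of
ncclCommInitRankScalable (gathering the root IDs across ranks, e.g. with
MPI, is left out):

    #include <nccl.h>

    /* Each root creates one unique ID via ncclGetUniqueId(); the IDs are
     * then gathered so every rank passes the same array (not shown).
     * Passing nroots IDs spreads the bootstrap load over nroots roots. */
    ncclComm_t initScalable(int nranks, int myrank, int nroots,
                            ncclUniqueId* rootIds) {
      ncclComm_t comm;
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      ncclCommInitRankScalable(&comm, nranks, myrank, nroots, rootIds,
                               &config);
      return comm;
    }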
Accelerate init bootstrap operations
* Reduce the number of calls to allgather.
* Allow roots to reply early to ranks when information is already
available.
* Add an option to use ncclNet instead of sockets to perform
bootstrap allgather operations.
Add PAT algorithms for Allgather and ReduceScatter
* Parallel Aggregated Trees, a variation of the Bruck algorithm.
* Logarithmic number of network steps for small sizes at scale.
* Only supports one rank per node at the moment.
Add support for registered buffers for intra-node communication.
* Allow registered user buffers to be accessed directly intra-node.
* Avoids extra copies in algorithms which permit it, saving
memory bandwidth and helping with compute overlap.
Add profiler plugin API
* New plugin API for profiling
* Supports various levels of profiling, with a hierarchy.
Asynchronous graph allocation
* Make calls to cudaMalloc and cudaMemcpy during graph allocation
asynchronous.
* Significantly speeds up graph capture.
Use fatal IB asynchronous events to stop network operation
* Avoids a cascade of follow-on error messages.
* Only fatal errors are affected; potentially transient errors
(e.g. port down) do not cause an immediate stop.
Set P2P level to PXB on AMD CPUs when using more than 2 GPUs per node
* P2P would cause a significant performance degradation when using
many GPUs, and therefore many interleaved data flows.
* Disable P2P through the CPU when we have 3+ GPUs per node; keep it
enabled when we only have 2 GPUs.
Improve the init logs to report the real NCCL function.
* Make the log report ncclCommInitRank or ncclCommSplit, rather than
the generic ncclCommInitRankFunc.
Add a parameter to set the location of the user configuration file.
* Add NCCL_CONF_FILE environment variable to set where the user's
configuration file resides.
Increase default IB timeout
* Increase IB timeout value from 18 to 20.
* Should help avoid fatal errors on large RoCE systems.
Add new check for nvidia peermem
* On Linux kernels 6.6+, /sys/kernel/mm/memory_peers is no longer
present; check for /sys/module/nvidia_peermem/version instead.
Fix old performance regression when mixing small and large operations.
* Improves distribution of work on channels.
Fix crash when NUMA IDs are equal to -1.
* Can happen when a NIC is a virtual NIC, or when Linux doesn't
know which NUMA node a device is attached to.
* Issue NVIDIA/nccl-tests#233
Fix tree graph search when NCCL_CROSS_NIC is set to 1.
* Would force NCCL to use the balanced_tree pattern, thereby
disabling LL128 on platforms with 1 GPU+1 NIC per PCI switch.
* Would also try to use alternate rings even though it was not
needed.
Compiler tweaks and fixes
* PR #1177
* PR #1228
Fix stack smash
* PR #1325
Fixes for multi-node NVLink + IB operation
Coverity fixes and comments.
Rework core for NVIDIA Trusted Computing
* Compress work structs so that they are shared between channels
* Utilize the full amount of kernel argument space permitted (4k)
before resorting to work fifo.
* Rework the task preprocessing phase.
* Use a separate abortDevFlag which is kept in sync with abortFlag
using cudaMemcpy operations.
* Rename src/include/align.h to src/include/bitops.h
Add lazy connection establishment for collective operations
* Move buffer allocation and connection establishment to the first
collective operation using that algorithm.
* Accelerate init time and reduce memory usage.
* Avoid allocating NVLS buffers if all calls are registered.
* Compute algo/proto in ncclLaunchCollTasksInfo early on.
* Connect peers in ncclCollPreconnectFunc if not connected already.
* Also move shared buffer creation to the first send/recv call.
Accelerate intra-node NVLink detection
* Make each rank only detect NVLinks attached to its GPU.
* Fuse XMLs to reconstruct the full NVLink topology
Add init profiling to report time spent in different init phases.
* Report timings of bootstrap, allgather, search, connect, etc.
* Add new "PROFILE" category for NCCL_DEBUG_SUBSYS.
Add support for PCI p2p on split PCI switches
* Detect split PCI switches through a kernel module exposing
switch information.
* Update the topology XML and graph to add those inter-switch
connections.
Add cost estimation API
* Add a new ncclGroupSimulateEnd primitive to return the estimated
time a group would take.
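A hedged sketch of the simulation call; the ncclSimInfo_t layout and its
estimatedTime field follow our reading of the 2.22 header and should be
treated as assumptions:

    #include <stdio.h>
    #include <nccl.h>

    /* Build a group as usual, then simulate it instead of launching it. */
    void estimateAllReduce(const void* sendbuf, void* recvbuf, size_t count,
                           ncclComm_t comm, cudaStream_t stream) {
      ncclSimInfo_t simInfo = NCCL_SIM_INFO_INITIALIZER;
      ncclGroupStart();
      ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm,
                    stream);
      ncclGroupSimulateEnd(&simInfo);  /* takes the place of ncclGroupEnd */
      printf("estimated group time: %f\n", simInfo.estimatedTime);
    }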
Net/IB: Add separate traffic class for fifo messages
* Add NCCL_IB_FIFO_TC to control the traffic class of fifo messages
independently from NCCL_IB_TC.
Merges PR #1194
Net/IB: Add support for IB router
* Use flid instead of lid if subnets do not match
* Warn if flid is 0
Optimizations and fixes for device network offload (unpack)
* Double the default number of channels
* Cache netDeviceType
* Fix save/increment head logic to enable Tree support.
Support ncclGroupStart/End for ncclCommAbort/Destroy
* Allow Abort/Destroy to be called within a group when managing
multiple GPUs with a single process.
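A minimal sketch for that single-process, multi-GPU case:

    #include <nccl.h>

    /* Grouping the aborts lets NCCL tear down all communicators together
     * instead of blocking on the first one. */
    void abortAll(ncclComm_t* comms, int ndev) {
      ncclGroupStart();
      for (int i = 0; i < ndev; i++) ncclCommAbort(comms[i]);
      ncclGroupEnd();
    }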
Improve Tuner API
* Provide to the plugin the original cost table so that the plugin
can leave unknown or disabled algo/proto combinations untouched.
* Remove nvlsSupport and collnetSupport.
Do not print version to stdout when using a debug file
* Also print version from all processes with INFO debug level.
Fixes issue #1271
Fix clang warnings in NVTX headers
* Update NVTX headers to the latest version
Fixes issue #1270
Disable port fusion in heterogeneous systems
* Do not fuse ports if a mix of multi-port and single-port NICs is detected.
Fix NVLS graphs search for dual NICs.
* Fix NVLS graph search when we have more than one NIC per GPU.
Fix crash with collnetDirect
* Add separate graph search for collnetDirect, testing alltoall paths
and working similarly to the NVLS search.
Fix hang when nodes have different CPU types
* Add the CPU type to the rank peer info.
* Align all ranks on the CPU type after the first allgather.
* Only use the aligned CPU type for all tuning operations.
Fixes issue #1136
Fixes issue #1184
Fix performance of registered send/recv operations
* Allow for single full-size operations.
* Add INFO to confirm the registration of send/recv buffers.
Move all sync ops to finalize stage
* Ensure ncclCommDestroy is non-blocking if ncclCommFinalize has
been called.
Improve error reporting during SHM segment creation
Improve support of various compilers
Merges PR #1177
Merges PR #1228
Allow net and tuner plugins to be statically linked
* Search for ncclNet or ncclTuner symbols in the main binary.
Merges PR #979
Plugin examples includes cleanup
* Harmonize err.h and common.h usage.
* Add mixed plugin with both net and tuner.
Add support for IB SHARP 1PPN operation with user buffers.
Improve support for MNNVL, add NVLS support and multi-clique support.
* Detect the NVLS clique through NVML
* Exchange XML between peers in the same NVLS clique and fuse XMLs
before creating the topology graph.
* Rework bootstrap allgather algorithms to allow for large allgather
operations intra-node (XML exchange).
Net/IB: add support for dynamic GID detection.
* Automatically select a RoCEv2/IPv4 interface by default. Allow
selecting IPv6 or even the network/mask.
Reduce NVLS memory usage.
* Add stepSize as property of a connection to allow for different
sizes on different peers; set it to 128K for NVLink SHARP.
Improve tuner loading
* Look for more paths, be more consistent with the network device
plugin.
* Also search for tuner support inside the net plugin.
Improve tuner API
* Add context to support multi-device per process.
Add magic number around comm object to detect comm corruption.
* Add some basic check around communicators so that we can report a
problem when a communicator gets corrupted or a wrong comm pointer
is passed to NCCL.
Fix net/IB error path. Github PR #1164
Fix collnet rail mapping with split comm.
Fix packet reordering issue causing bootstrap mismatch
* Use a different tag in ncclTransportP2pSetup for the connectInfo
exchange and the following barrier.
Fix hang when crossNic is inconsistent between ranks.
Fix minCompCap/maxCompCap computation. Github issue #1184
Fix bug in plan enqueue logic where plans could silently fail to launch for
some communicators. Triggered when all of the following are true:
1. Multiple communicators per ncclGroup.
2. Communicators within a group have different plan counts.
3. Intra-process launch barrier disabled.
Add support for alternating rings, allow for cross-nic rings without
cross-rail communication.
Add support for user buffer registration for network send/recv.
Optimize aggregated operations to better utilize all channels.
Add flattening for BCM PCI gen5 switches.
Add support for inter-node NVLink communication
Add support for port fusion in NET/IB.
Add support for ReduceScatter and AllGather using Collnet.
Update net API to v8.
Fix hang during A2A connection.
CPUID writes to EAX, EBX, ECX, and EDX, so the inline asm must declare all
four as outputs. Otherwise a register currently in use might get silently
overwritten, causing all kinds of failures such as segfaults or wrong
results. Alternatively, the `__cpuid` macro can be used, which avoids this
and related issues; do that, as suggested in GCC bug
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=112513.
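For illustration, the problematic pattern versus the fix (a sketch; the
actual NCCL code differs):

    #include <cpuid.h>

    void cpuidLeaf0(unsigned int* a, unsigned int* b, unsigned int* c,
                    unsigned int* d) {
      /* Broken variant (for contrast): declaring only EAX/EBX as outputs
       * lets CPUID silently clobber live values held in ECX/EDX:
       *   asm("cpuid" : "=a"(*a), "=b"(*b) : "a"(0));
       * The __cpuid macro from <cpuid.h> declares all four registers. */
      __cpuid(0, *a, *b, *c, *d);
    }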
Split transport connect phase into multiple steps to avoid port
exhaustion when connecting alltoall at large scale. Defaults to 128
peers per round.
Fix memory leaks on CUDA graph capture.
Fix alltoallv crash on self-sendrecv.
Make topology detection more deterministic when PCI speeds are not
available (fix issue #1020).
Properly close shared memory in NVLS resources.
Revert proxy detach after 5 seconds.
Add option to print progress during transport connect.
Add option to set NCCL_DEBUG to INFO on first WARN.
Add local user buffer registration for NVLink SHARP.
Add tuning plugin support.
Increase net API to v7 to allow for device-side packet reordering;
remove support for v4 plugins.
Add support for RoCE ECE.
Add support for C2C links.
Better detect SHM allocation failures to avoid crash with Bus Error.
Fix missing thread unlocks in bootstrap (fixes #936).
Disable network flush by default on H100.
Move device code from src/collectives/device to src/device.
The copy result of the first cudaMemcpyAsync was being used by the second
cudaMemcpyAsync with no synchronization in between. Fix this by allocating
a CPU-side array to cache the device-side addresses, so the two
back-to-back copies are no longer needed.
Fixes #957.
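A hedged illustration of the hazard; the function and variable names are
hypothetical, not NCCL's actual code:

    #include <cuda_runtime.h>

    void racyIndirectCopy(void** devAddrSlot, void* dst, size_t bytes,
                          cudaStream_t stream) {
      void* src;
      cudaMemcpyAsync(&src, devAddrSlot, sizeof(void*),
                      cudaMemcpyDeviceToHost, stream);
      /* BUG: 'src' is read on the host right now, while the copy above is
       * still in flight. Caching the device-side addresses in a CPU-side
       * array up front removes the dependency between the two copies. */
      cudaMemcpyAsync(dst, src, bytes, cudaMemcpyDeviceToDevice, stream);
    }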
Fix NVLS search (issue #931).
Increase max IB NICs to 32.
Fix inconsistent device ordering (issue #820).
Try to use different devices for different GPUs in systems with
more than one NIC per GPU.
Fix data corruption with Tree/LL128 on systems with 1GPU:1NIC.
Fix hang with Collnet on bfloat16 on systems with less than one NIC
per GPU.
Fix long initialization time.
Fix data corruption with Collnet when mixing multi-process and
multi-GPU per process.
Fix crash when shared memory creation fails.
Fix Avg operation with Collnet/Chain.
Fix performance of alltoall at scale with more than one NIC per GPU.
Fix performance for DGX H800.
Fix race condition in connection progress causing a crash.
Fix network flush with Collnet.
Fix performance of aggregated allGather/reduceScatter operations.
Fix PXN operation when CUDA_VISIBLE_DEVICES is set.
Fix NVTX3 compilation issues on Debian 10.
Add support for IB SHARP to NVLS (NVLink SHARP algorithm).
Add NVLS+Tree algorithm.
Add support for memory management using cuMem* functions.
Use all NICs for Send/Receive operations on systems with more than
one NIC per GPU (#804).
Add ncclCommSplit primitive, with resource sharing option in config.
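A minimal sketch, assuming the 2.18 signature and the splitShare config
field:

    #include <nccl.h>

    /* Split 'comm' into two halves, MPI_Comm_split style; splitShare=1
     * requests resource sharing with the parent communicator. */
    ncclResult_t splitHalf(ncclComm_t comm, int rank, int nranks,
                           ncclComm_t* newcomm) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.splitShare = 1;
      return ncclCommSplit(comm, rank < nranks/2 ? 0 : 1, rank, newcomm,
                           &config);
    }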
Fix alltoallv hang (#788)
Increase number of channels on H100 when we're not limited by NVLink.
Improve error reporting in case of IB failure, printing local and
remote ID (#779).
Add build option to allow compilation against RDMA includes instead
of dynamically loading IB verbs symbols (#802).
Fix context creation for progress thread (#803).
NET/IB: add option to use multiple QPs in round-robin mode.
Fix tree performance issue when NVB is disabled on HCM topologies.
Add new NVLS algorithm for allreduce using NVLink SHARP (intra-node only).
Add new config options: cgaClusterSize, minCTAs, maxCTAs, netName.
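A hedged sketch of the new options with ncclCommInitRankConfig; the values
are illustrative only:

    #include <nccl.h>

    ncclResult_t initWithConfig(ncclComm_t* comm, int nranks,
                                ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.cgaClusterSize = 2;   /* CGA cluster size (sm90 and later) */
      config.minCTAs = 4;          /* lower bound on CTAs per operation */
      config.maxCTAs = 16;         /* upper bound on CTAs per operation */
      config.netName = "Socket";   /* force a specific network module */
      return ncclCommInitRankConfig(comm, nranks, id, rank, &config);
    }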
Enable LL128 when we use PXN to close rings.
NVTX3 includes update.
Fix crash when one CollNet (SHARP) rail fails to initialize.
Add support for 400Gbit NDR network adapters (CX7)
Handle EINTR in socket poll() function
Add NCCL_PROGRESS_APPENDOP_FREQ to control op append overhead
Resource cleanup fixes
Fix double free in case of init failure
Fix crash in ncclCommAbort
Revert AMD speed commit
NCCL Net v4 supports a maximum handle size of 64 bytes, whereas the ext-net
example header files set it for NCCL Net v3. Since the `aws-ofi-nccl`
plugin plans to follow the example header files, fix it here.
Add support for CUDA 12.0, drop Kepler (sm_35).
Support for H100 features.
Make socket code more robust and protected. Solves #555.
Improve performance on large CUDA graphs, reducing dependencies.
Reduce inter-socket bandwidth on AMD CPUs to favor better paths.
Various fixes to ncclCommAbort.
Make service thread polling resistant to EINTR.
Compile with profiling API by default.
Extend NVTX instrumentation with call arguments.
Fix crash with CollnetChain on some node topologies
Fix hang when interleaving the capture of different graphs
Fix hang during init in multi-threaded mode
Fix potential data corruption with LL128 protocol on unaligned buffers.
Fix CPU usage during preconnect
Fix double-free in the error path for ncclCommInitAll.
Workaround hang on H100 with Ring/LL128 on 2 GPUs.
Make sure all calls that invoke cudaMalloc (including devCommSetup) happen
before the last bootstrapBarrier. That way, we avoid cudaMalloc calls being
blocked by an NCCL kernel launched on another GPU by another thread that
completed init faster.
Resolves #623.
Add support for improved fault tolerance: non-blocking mode, new
init function with config, and ncclCommFinalize function.
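A hedged sketch of the non-blocking flow (error handling and the polling
loop simplified):

    #include <nccl.h>

    ncclResult_t initNonBlocking(ncclComm_t* comm, int nranks,
                                 ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.blocking = 0;                /* init returns immediately */
      ncclCommInitRankConfig(comm, nranks, id, rank, &config);
      ncclResult_t state = ncclInProgress;
      while (state == ncclInProgress) ncclCommGetAsyncError(*comm, &state);
      return state;
    }
    /* At teardown, ncclCommFinalize(comm) completes outstanding work;
     * ncclCommDestroy(comm) then frees resources without blocking. */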
Reintroduce collnet+chain algorithm, alongside collnet+direct.
Add LL protocol for intra-node P2P (on by default) and network
communication (off by default).
Use network instead of shared memory when performance is better.
Fix: wait for CUDA graph destroy before destroying comm with linked
graph resources.
Remove aggressive polling during enqueue.
Fix DMABUF fallback on MOFED 5.4 and earlier.
Fix NCCL_DEBUG_FILE: it stopped working properly with the v2.13.4 updates
(https://github.com/NVIDIA/nccl/pull/682), which set `ncclDebugLevel` only
after parsing `NCCL_DEBUG_FILE`. Move the parsing of `tempNcclDebugLevel`
before the processing of `NCCL_DEBUG_FILE` to ensure `NCCL_DEBUG_FILE` is
parsed only when `NCCL_DEBUG > NCCL_LOG_VERSION` (same as the previous
behavior).
Optimize CUDA graph launch; avoid launching a CPU callback for
intra-node operations.
Simplify kernel common code to improve the latency of send/recv
operations.
Strengthen CUDA streams semantics.
Change NET API to v6, to add dmabuf support.
Add ncclGetLastError() function.
Add ncclRemoteError code and use it for remote network errors.
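A hedged sketch combining the two error-reporting additions:

    #include <stdio.h>
    #include <nccl.h>

    void checkNccl(ncclResult_t res, ncclComm_t comm) {
      if (res == ncclSuccess) return;
      /* ncclRemoteError flags a failure on a remote peer or the network. */
      fprintf(stderr, "NCCL error %d (%s)%s\n", res, ncclGetLastError(comm),
              res == ncclRemoteError ? " [remote]" : "");
    }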
Support the use of a different NCCL_NET parameter per communicator.
Add support for SHM and P2P transfers using cudaMemcpy.
Improve allreduce performance when we have more than one network interface per
GPU and we need to use PXN to close rings.
Add support for PCI Gen5 on 5.4 kernels.
Fix crash when setting NCCL_SET_THREAD_NAME.
Fix random crash in init due to uninitialized struct.
Fix hang on cubemesh topologies.
Add P2P_DIRECT_DISABLE parameter to disable direct access to pointers within a
process.