235 Commits

Sylvain Jeaugey
5d3ab08b69 2.17.1-1
Add new NVLS algorithm for allreduce using NVLink SHARP (intra-node only).
Add new config options: cgaClusterSize, minCTAs, maxCTAs, netName.
Enable LL128 when we use PXN to close rings.
NVTX3 includes update.
Fix crash when one CollNet (SHARP) rail fails to initialize.
v2.17.1-1
2023-03-01 00:39:04 -08:00
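For reference, the new config options land in ncclConfig_t and are passed at communicator creation via ncclCommInitRankConfig. A minimal sketch, assuming the 2.17 field names listed above; the values are illustrative, not tuning advice:

    #include <nccl.h>

    /* Sketch: set the new 2.17 config knobs when creating a communicator.
       Values below are illustrative, not recommendations. */
    void initWithConfig(ncclComm_t* comm, int nRanks, ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.cgaClusterSize = 2;   /* thread block cluster size for NCCL kernels */
      config.minCTAs = 4;          /* lower bound on CTAs per NCCL kernel */
      config.maxCTAs = 16;         /* upper bound on CTAs per NCCL kernel */
      config.netName = "Socket";   /* select a specific network by name */
      ncclCommInitRankConfig(comm, nRanks, id, rank, &config);
    }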
Sylvain Jeaugey
f3d5166783 2.16.5-1
Add support for 400Gbit NDR network adapters (CX7)
Handle EINTR in socket poll() function
Add NCCL_PROGRESS_APPENDOP_FREQ to control op append overhead
Resource cleanup fixes
Fix double free in case of init failure
Fix crash in ncclCommAbort
Revert AMD speed commit
v2.16.5-1
2023-02-02 12:52:47 -08:00
Rashika Kheria
93840e7476 Fix maximum handle size for NCCL Net v4 API
NCCL Net v4 supports a maximum handle size of 64 bytes, whereas the
ext-net example header files set it for NCCL Net v3. Since the
`aws-ofi-nccl` plugin plans to follow the example header files, fix it
here.

Signed-off-by: Rashika Kheria <rashika@amazon.com>
2023-01-18 13:31:57 +01:00
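For context, the limit in question is the NCCL_NET_HANDLE_MAXSIZE macro in the example net headers; per the message above, the v4 value is 64 bytes. A sketch of the corrected definition (not the verbatim patch):

    /* NCCL Net v4 allows connection handles of up to 64 bytes; the example
       headers previously carried the v3 value. */
    #define NCCL_NET_HANDLE_MAXSIZE 64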
Sylvain Jeaugey
28189e2df8 2.16.2-1
Add support for CUDA 12.0, drop Kepler (sm_35).
Support for H100 features.
Make socket code more robust and protected. Solves #555.
Improve performance on large CUDA graphs, reducing dependencies.
Reduce inter-socket bandwidth on AMD CPUs to favor better paths.
Various fixes to ncclCommAbort.
Make service thread polling resistant to EINTR.
Compile with profiling API by default.
Extend NVTX instrumentation with call arguments.
v2.16.2-1
2022-11-30 02:31:59 -08:00
Sylvain Jeaugey
614b49f0de Fix google-fastsocket plugin build 2022-11-22 02:13:13 -08:00
Sylvain Jeaugey
55b1d8ab98 Add documentation for NCCL NET plugins
Also repurpose dummy plugin as example, including headers and
compat layers from v6 to v2.
2022-11-22 02:12:53 -08:00
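A sketch of the loading convention the plugin documentation describes: NCCL dlopens the plugin library and resolves the newest ncclNetPlugin_vN symbol it can, which the example's compat layers provide down to v2. The exact lookup order shown here is an assumption:

    #include <dlfcn.h>
    #include <stdio.h>

    /* Sketch: locate a NET plugin the way the docs describe. */
    int main(void) {
      void* lib = dlopen("libnccl-net.so", RTLD_NOW | RTLD_LOCAL);
      if (lib == NULL) { fprintf(stderr, "no net plugin: %s\n", dlerror()); return 1; }
      void* net = dlsym(lib, "ncclNetPlugin_v6");            /* newest API first */
      if (net == NULL) net = dlsym(lib, "ncclNetPlugin_v5"); /* then fall back */
      printf("net plugin entry point: %p\n", net);
      dlclose(lib);
      return 0;
    }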
Sylvain Jeaugey
2f4cb874ba Merge tag 'v2.15.5-1' 2022-10-25 01:15:22 -07:00
Sylvain Jeaugey
cb111f764a 2.15.5-1
Fix crash with CollnetChain on some node topologies.
Fix hang when interleaving the capture of different graphs.
Fix hang during init in multi-threaded mode.
Fix potential data corruption with LL128 protocol on unaligned buffers.
Fix CPU usage during preconnect.
Fix double-free in the error path of ncclCommInitAll.
Work around hang on H100 with Ring/LL128 on 2 GPUs.
v2.15.5-1
2022-10-25 00:55:55 -07:00
Sylvain Jeaugey
d128d62238 Merge tag 'v2.15.1-1' 2022-10-07 11:00:26 -07:00
John Bachan
2401f4a918 Fixes a double-free in the error path of ncclCommInitAll.
Fixes https://github.com/NVIDIA/nccl/issues/726
2022-10-03 17:12:32 -07:00
Sylvain Jeaugey
da8152e57a 2.15.1-1
Add support for H100 (sm90).
Make sure NCCL kernel honor user stream priorities.
v2.15.1-1
2022-09-27 02:31:13 -07:00
Sylvain Jeaugey
99c28f2e75 Merge remote-tracking branch 'origin/master' 2022-09-27 02:24:41 -07:00
Cliff Woolley
78313a6d21 Use compatibility shim only with static cudart
Closes issue 658
2022-09-27 02:22:48 -07:00
Sylvain Jeaugey
ecab28a7c9 Fix potential deadlock during init in multi-thread mode.
Make sure all calls to cudaMalloc (including devCommSetup) happen
before the last bootstrapBarrier. That way, we avoid cudaMalloc calls
being blocked by an NCCL kernel launched on another GPU by another
thread which completed init faster.

Resolve #623.
2022-09-26 02:13:10 -07:00
Jane Xu
f89fd4777d address review comments 2022-09-20 11:58:33 +02:00
Jane Xu
79fb0326ac Fix intermittent 11.6 builds: generate unique .cu file for each object file 2022-09-20 11:58:33 +02:00
Sylvain Jeaugey
c4e2aa6c79 2.14.3-1
Add support for improved fault tolerance: non-blocking mode, a new
init function with config, and the ncclCommFinalize function.
Reintroduce collnet+chain algorithm, alongside collnet+direct.
Add LL protocol for intra-node P2P (on by default) and network
communication (off by default).
Use network instead of shared memory when performance is better.
Fix: wait for CUDA graph destroy before destroying comm with linked
graph resources.
Remove aggressive polling during enqueue.
Fix DMABUF fallback on MOFED 5.4 and earlier.
v2.14.3-1
2022-08-18 02:53:17 -07:00
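A sketch of the non-blocking init path added here (see above), assuming the documented config and async-error API; error handling is trimmed for brevity:

    #include <nccl.h>

    /* Sketch: non-blocking communicator creation. Init may return
       ncclInProgress; poll the async error state until it completes. */
    ncclResult_t nonBlockingInit(ncclComm_t* comm, int nRanks,
                                 ncclUniqueId id, int rank) {
      ncclConfig_t config = NCCL_CONFIG_INITIALIZER;
      config.blocking = 0;                 /* allow init to return early */
      ncclResult_t res = ncclCommInitRankConfig(comm, nRanks, id, rank, &config);
      if (res != ncclSuccess && res != ncclInProgress) return res;
      do {                                 /* poll until init completes */
        ncclCommGetAsyncError(*comm, &res);
      } while (res == ncclInProgress);
      return res;
    }

    /* Teardown: ncclCommFinalize flushes outstanding operations before
       ncclCommDestroy frees the resources. */
    void shutdownComm(ncclComm_t comm) {
      ncclCommFinalize(comm);
      ncclCommDestroy(comm);
    }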
Ching-Hsiang Chu
e1d9b273b0 fix NCCL_DEBUG_FILE
Summary: NCCL_DEBUG_FILE does not work properly since the recent v2.13.4 updates (https://github.com/NVIDIA/nccl/pull/682) because it now sets `ncclDebugLevel` after parsing `NCCL_DEBUG_FILE`. This patch moves the parsing of `tempNcclDebugLevel` before the processing of `NCCL_DEBUG_FILE` to ensure `NCCL_DEBUG_FILE` is parsed only when `NCCL_DEBUG > NCCL_LOG_VERSION` (same as the previous behavior).

Differential Revision: D38415208

fbshipit-source-id: 5689bbb798e73efb9e8594557666987f07e89a30
2022-08-18 11:50:42 +02:00
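The fix is purely an ordering change. A simplified sketch of the corrected flow; names and the level constants are abbreviated from the real debug code:

    #include <stdio.h>
    #include <stdlib.h>
    #include <strings.h>

    /* Sketch: derive the debug level from NCCL_DEBUG first, then consult
       NCCL_DEBUG_FILE only when logging is enabled. Abbreviated subset. */
    enum { NCCL_LOG_NONE, NCCL_LOG_VERSION, NCCL_LOG_WARN, NCCL_LOG_INFO };

    static FILE* debugInit(void) {
      int tempNcclDebugLevel = NCCL_LOG_NONE;
      const char* lvl = getenv("NCCL_DEBUG");        /* level parsing first */
      if (lvl != NULL) {
        if (strcasecmp(lvl, "WARN") == 0) tempNcclDebugLevel = NCCL_LOG_WARN;
        else if (strcasecmp(lvl, "INFO") == 0) tempNcclDebugLevel = NCCL_LOG_INFO;
      }
      const char* fname = getenv("NCCL_DEBUG_FILE"); /* file handling second */
      if (tempNcclDebugLevel > NCCL_LOG_VERSION && fname != NULL) {
        FILE* f = fopen(fname, "w");
        if (f != NULL) return f;
      }
      return stdout;
    }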
Sylvain Jeaugey
19ab67d172 2.13.4-1
Optimize CUDA graph launch; avoid launching a CPU callback for
intra-node operations.
Simplify kernel common code to improve the latency of send/recv
operations.
Strengthen CUDA streams semantics.
Change NET API to v6, to add dmabuf support.
Add ncclGetLastError() function.
Add ncclRemoteError code and use it for remote network errors.
Support the use of a different NCCL_NET parameter per communicator.
Add support for SHM and P2P transfers using cudaMemcpy.
v2.13.4-1
2022-07-11 08:10:34 -07:00
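A sketch of the new error-reporting surface from this release: ncclGetLastError returns a human-readable description of the last failure (the comm argument is currently unused and may be NULL), and ncclRemoteError flags failures originating from a remote peer or the network:

    #include <nccl.h>
    #include <stdio.h>

    /* Sketch: report an NCCL failure with both the generic code string and
       the detailed last-error text. */
    void reportFailure(ncclComm_t comm, ncclResult_t res) {
      if (res == ncclSuccess) return;
      fprintf(stderr, "NCCL failure: %s\nDetail: %s\n",
              ncclGetErrorString(res),  /* short code-to-string mapping */
              ncclGetLastError(comm));  /* verbose last-error text */
      if (res == ncclRemoteError)       /* new in 2.13 */
        fprintf(stderr, "error originated on a remote rank or the network\n");
    }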
Sylvain Jeaugey
7aa1c46fd5 2.12.12-1
Improve allreduce performance when we have more than one network interface per
GPU and we need to use PXN to close rings.
Add support for PCI Gen5 on 5.4 kernels.
Fix crash when setting NCCL_SET_THREAD_NAME.
Fix random crash in init due to uninitialized struct.
Fix hang on cubemesh topologies.
Add P2P_DIRECT_DISABLE parameter to disable direct access to pointers within a
process.
v2.12.12-1
2022-05-13 00:26:57 -07:00
Sylvain Jeaugey
9bfc1c6e35 Update Makefile to install static library.
Make sure `make install` also installs the static library.
Fixes #662.
2022-04-08 14:00:43 +02:00
Sylvain Jeaugey
8133784b32 Merge remote-tracking branch 'origin/master' 2022-03-30 02:29:05 -07:00
Sylvain Jeaugey
353e8ba446 2.12.10-1
Fix bug with CollNet
Fix bug with zero-bytes send/recv operations
Fix NCCL_PARAM implementation to avoid taking a lock on every call
Fix bug when setting NCCL_IB_QPS_PER_CONNECTION to more than one.
Improve error reporting for network errors.
v2.12.10-1
2022-03-30 02:27:01 -07:00
Sylvain Jeaugey
2247152a8e Fix merging error 2022-03-30 02:14:32 -07:00
Sylvain Jeaugey
2dfd83752c Merge branch 'master' into truncated_msg_warning 2022-03-30 10:58:05 +02:00
Ke Wen
1382a87306 Display host name instead of numeric IP when referring to a peer
For easier interpretation of debug messages like "connection closed by
peer", "peer message truncated" and "peer collective mismatch"
2022-03-30 10:47:10 +02:00
Christopher Hesse
b895abcdb8 Fix typo in net_ib.cc 2022-03-30 10:45:01 +02:00
Felix Abecassis
1c7c014ceb Remove unnecessary newline in plugin logging
Signed-off-by: Felix Abecassis <fabecassis@nvidia.com>
2022-03-30 10:44:49 +02:00
John Bachan
44eb40da0e Add pthread_detach() calls for threads we never pthread_join(). Helps
reduce diagnostic noise for ThreadSanitizer.

Fixes https://github.com/NVIDIA/nccl/issues/649
2022-03-15 10:27:59 -07:00
Sylvain Jeaugey
3c223c105a 2.12.7-1
Add network communication through another GPU connected with NVLink
(PXN).
Add aggregation of messages coming from different local GPUs through
PXN and going to the same destination.
Add new v5 plugin API with grouped receives and tags.
Add compat for v4 plugins.
Add naming of NCCL threads to help debugging.
Fix NVLink detection and avoid data corruption when some NVLinks are
down.
Add support for Relaxed Ordering for IB.
Add profiling and timing infrastructure.
v2.12.7-1
2022-03-02 20:48:56 +01:00
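For reference, grouped receives surface in the plugin API roughly as below. This sketch mirrors the shape of the v5 entry points in the ext-net example headers; the struct is renamed to mark it as an illustration rather than the actual type:

    #include <nccl.h>

    /* Sketch of the v5 grouped-receive surface: a single irecv posts n
       receives at once, each matched by tag against the sender's isend. */
    typedef struct {
      /* ...other plugin entry points elided... */
      ncclResult_t (*isend)(void* sendComm, void* data, int size, int tag,
                            void* mhandle, void** request);
      ncclResult_t (*irecv)(void* recvComm, int n, void** data, int* sizes,
                            int* tags, void** mhandles, void** request);
    } ncclNetSketch_v5_t;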
Ke Wen
fbfb6ac5d7 Split IB parameter sanity check into two parts
First part on collective mismatch, second part on internal errors
2022-02-08 15:21:22 -08:00
Sylvain Jeaugey
0144073673 Fix ext-net/google-fastsocket build 2022-01-24 07:19:48 -08:00
Sylvain Jeaugey
cc78e9fab8 Revert "remove unused basePath"
This reverts commit 445bc1965720787aa19c8fc1c0bf62db43db2dda.
2022-01-21 12:30:34 +01:00
void-main
445bc19657 remove unused basePath 2022-01-21 12:12:26 +01:00
Chang Lan
c5790b3672 Build fastsocket plugin from ext-net 2021-12-09 08:41:05 +01:00
Ke Wen
c88c9f873f Add env NCCL_NET_DISABLE_INTRA
Disable NET transport for intra-node communication by setting the env to 1.
It provides an option to error out instead of falling back to NET when superior intra-node transports (P2P and SHM) are unavailable.
2021-12-08 16:28:19 +01:00
Ke Wen
f589932130 Improve warning message about truncated messages
Display hints about the cause so that it is easier for the user to debug.
Also change the error type from InternalError to InvalidUsage, as most
of the time this is caused by a mismatch in collective size or env settings.
2021-12-02 16:13:15 -08:00
Chris Jones
8cf7325d69 Perform busIdToInt64 on the stack.
I noticed when I enabled `NCCL_DEBUG_SUBSYS=ALLOC` that this function is
called thousands of times, making the log output unintelligible.
Fortunately, this function can be implemented without heap allocations.
2021-11-19 09:35:55 +01:00
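A heap-free sketch in the spirit of this change: fold a PCI bus id string such as "0000:07:00.0" into an int64 by accumulating hex digits and skipping separators, keeping everything on the stack. This is an illustration, not the verbatim NCCL implementation:

    #include <stdint.h>

    /* Sketch: convert a PCI bus id string to int64 without allocating. */
    static int64_t busIdToInt64(const char* busId) {
      int64_t id = 0;
      for (const char* c = busId; *c != '\0'; c++) {
        if (*c == ':' || *c == '.') continue;        /* skip separators */
        int digit;
        if (*c >= '0' && *c <= '9') digit = *c - '0';
        else if (*c >= 'a' && *c <= 'f') digit = *c - 'a' + 10;
        else if (*c >= 'A' && *c <= 'F') digit = *c - 'A' + 10;
        else break;                                  /* stop at junk */
        id = (id << 4) | digit;                      /* shift in one nibble */
      }
      return id;
    }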
John Bachan
30ca3fcacf Fix compilation failure in "src/enqueue.cc" on older GCC because of
missing `#include <cstring>`.
2021-09-23 09:55:16 -07:00
Sylvain Jeaugey
4ec992fab7 Fix Collnet when GDR is disabled 2021-09-22 05:19:16 -07:00
Ke Wen
e11238b302 2.11.4-1
Add new API for creating a reduction operation which multiplies the input by a rank-specific scalar before doing an inter-rank summation (see: ncclRedOpCreatePreMulSum).
Improve CollNet (SHARP) performance of ncclAllReduce when captured in a CUDA Graph via user buffer registration.
Add environment variable NCCL_NET_PLUGIN="<suffix>" to allow user to choose among multiple NCCL net plugins by substituting into "libnccl-net-<suffix>.so".
Fix memory leak of NVB connections.
Fix topology detection of IB Virtual Functions (SR-IOV).
v2.11.4-1
2021-09-08 16:06:23 -07:00
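A sketch of the new premultiplied-sum reduction named above: each rank's input is scaled by a per-rank scalar before the inter-rank summation. Buffer and stream setup are elided; the scalar is passed from host memory:

    #include <nccl.h>

    /* Sketch: allreduce where every rank's contribution is premultiplied
       by `scale` (ncclScalarHostImmediate = scalar read from host now). */
    void scaledAllReduce(const float* sendbuf, float* recvbuf, size_t count,
                         float scale, ncclComm_t comm, cudaStream_t stream) {
      ncclRedOp_t op;
      ncclRedOpCreatePreMulSum(&op, &scale, ncclFloat,
                               ncclScalarHostImmediate, comm);
      ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, op, comm, stream);
      ncclRedOpDestroy(op, comm);  /* safe once the op has been enqueued */
    }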
John Bachan
5f2f2f670f Fix for https://github.com/NVIDIA/nccl/issues/560
ncclGroups containing operations of mixed datatype, element count, or
collective type would induce a crash.
2021-08-31 15:50:05 -07:00
Ke Wen
7e51592129 2.10.3-1
Add support for bfloat16.
Add ncclAvg reduction operation.
Improve performance for aggregated operations.
Improve performance for tree.
Improve network error reporting.
Add NCCL_NET parameter to force a specific network.
Add NCCL_IB_QPS_PER_CONNECTION parameter to split IB traffic onto multiple queue pairs.
Fix topology detection error in WSL2.
Fix proxy memory elements affinity (improve alltoall performance).
Fix graph search on cubemesh topologies.
Fix hang in cubemesh during NVB connections.
v2.10.3-1
2021-07-08 14:30:14 -07:00
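A sketch combining two of the additions above: bfloat16 buffers reduced with the new ncclAvg op, which sums across ranks and divides by the number of ranks:

    #include <nccl.h>

    /* Sketch: average gradients across ranks in bfloat16 using ncclAvg. */
    void averageGradients(const void* sendbuf, void* recvbuf, size_t count,
                          ncclComm_t comm, cudaStream_t stream) {
      ncclAllReduce(sendbuf, recvbuf, count, ncclBfloat16, ncclAvg, comm, stream);
    }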
Sylvain Jeaugey
3fec2fa5ee 2.9.9-1
Fix crash when setting NCCL_MAX_P2P_NCHANNELS below nchannels.
Fix hang during sendrecv dynamic NVB connection establishment on
cubemesh topologies.
Add environment variable to only use SHARP on communicators beyond
a given number of ranks.
Add debug subsystem to trace memory allocations.
Fix compilation with TRACE=1. (Issue #505)
v2.9.9-1
2021-05-12 11:09:31 -07:00
Sylvain Jeaugey
ca8485b0d0 2.9.8-1
Fix memory leaks.
Fix crash in bootstrap error case.
Fix Collnet clean-up issue.
Make PCI switch vendor/device optional for XML injection.
Add support for nvidia-peermem module.
v2.9.8-1
2021-05-10 14:00:03 -07:00
Sylvain Jeaugey
a46ea10583 2.9.6-1
Add support for CUDA graphs.
Fuse BCM Gen4 switches to avoid suboptimal performance on some platforms. Issue #439.
Fix bootstrap issue caused by connection reordering.
Fix CPU locking block.
Improve CollNet algorithm.
Improve performance on DGX A100 for communicators with only one GPU per node.
v2.9.6-1
2021-04-12 16:00:46 -07:00
Sylvain Jeaugey
911d61f214 2.8.4-1
Fix hang in corner cases of alltoallv using point to point send/recv.
Harmonize error messages.
Fix missing NVTX section in the license.
Update README.
v2.8.4-1
2021-02-09 15:36:48 -08:00
Jonas Zhou
3996562690 x86: Add CPU detection for Zhaoxin processors
Signed-off-by: Jonas Zhou <JonasZhou@zhaoxin.com>
2020-12-17 11:15:18 -08:00
Sylvain Jeaugey
920dbe5b35 2.8.3-1
Optimization for Tree allreduce on A100.
Improve aggregation performance.
Use shared buffers for inter-node send/recv.
Add NVTX profiling hooks.
Accelerate alltoall connections by merging communication for all
channels.
Add support for one hop communication through NVLink, for faster
send/recv communication on cubemesh topologies like DGX-1.
Improve alltoall scheduling to better balance intra/inter node
communication.
Increase send/recv parallelism by 8x, each warp sending or
receiving to a different peer.
Net: move to v4.
Net: make flush operation asynchronous to accelerate alltoall.
Net: define maximum number of requests.
Fix hang when using LL128 protocol after 2^31 steps.
Fix #379 : topology injection failing when using less GPUs than
described in the XML.
Fix #394 : protocol mismatch causing hangs or crashes when using
one GPU per node.
v2.8.3-1
2020-11-17 11:08:52 -08:00
xietingwew
084207e685 fix proxyArgs for trace log 2020-10-21 09:18:40 -07:00