245 Commits

Author SHA1 Message Date
Sylvain Jeaugey
0b083e5209 2.18.6-1 v2.18.6-1 2023-10-10 00:34:18 -07:00
Kaiming Ouyang
4365458757 Fix cudaMemcpyAsync bug
The copy result of the first cudaMemcpyAsync was used by the
second cudaMemcpyAsync without a sync in between. This patch fixes it
by allocating a CPU-side array to cache the device-side addresses, so
that we can avoid the back-to-back CUDA memory copies.

Fixes #957
2023-09-20 05:51:14 -07:00
Sylvain Jeaugey
559b70f86c 2.18.5-1
Fix NVLS search (issue #931).
Increase max IB NICs to 32.
Fix inconsistent device ordering (issue #820).
Try to use different devices for different GPUs in systems with
more than one NIC per GPU.
v2.18.5-1
2023-08-23 06:32:36 -07:00
Sylvain Jeaugey
8ed014bae9 Fix inter-node NVLS graph search
We were passing a net ID instead of a gpu index, which could cause
crashes if those were unrelated (and they usually are).

Issue #931
2023-08-02 07:06:35 -07:00
Dmitrii Gabor
6e24ef4e1f Prevent WR index truncation in the InfiniBand transport plugin 2023-06-28 11:39:19 +02:00
Sylvain Jeaugey
ea38312273 2.18.3-1
Fix data corruption with Tree/LL128 on systems with 1GPU:1NIC.
Fix hang with Collnet on bfloat16 on systems with less than one NIC
per GPU.
Fix long initialization time.
Fix data corruption with Collnet when mixing multi-process and
multi-GPU per process.
Fix crash when shared memory creation fails.
Fix Avg operation with Collnet/Chain.
Fix performance of alltoall at scale with more than one NIC per GPU.
Fix performance for DGX H800.
Fix race condition in connection progress causing a crash.
Fix network flush with Collnet.
Fix performance of aggregated allGather/reduceScatter operations.
Fix PXN operation when CUDA_VISIBLE_DEVICES is set.
Fix NVTX3 compilation issues on Debian 10.
v2.18.3-1
2023-06-14 01:29:17 -07:00
Sylvain Jeaugey
d97a32fac8 2.18.1-1
Add support for IB SHARP to NVLS (NVLink SHARP algorithm).
Add NVLS+Tree algorithm.
Add support for memory management using cuMem* functions.
Use all NICs for Send/Receive operations on systems with more than
one NIC per GPU (#804).
Add ncclCommSplit primitive, with resource sharing option in config.
Fix alltoallv hang (#788)
Increase number of channels on H100 when we're not limited by NVLink.
Improve error reporting in case of IB failure, printing local and
remote ID (#779).
Add build option to allow compilation against RDMA includes instead
of dynamically loading IB verbs symbols (#802).
Fix context creation for progress thread (#803).
NET/IB: add option to use multiple QPs in round-robin mode.
Fix tree performance issue when NVB is disabled on HCM topologies.
v2.18.1-1
2023-04-18 03:58:25 -07:00
David Addison
9b7d5edbfc
Merge pull request #822 from KaimingOuyang/github/pytorch-hang-fix
Shutdown socket before close in ncclSocketClose()
2023-04-14 19:52:45 -07:00
Kaiming Ouyang
006b6bc7dc Add a comment to shutdown() in ncclSocketClose 2023-04-13 09:13:44 -07:00
Kaiming Ouyang
367e9b61c3 Shutdown socket before close in ncclSocketClose() 2023-04-13 09:11:52 -07:00
Sylvain Jeaugey
5d3ab08b69 2.17.1-1
Add new NVLS algorithm for allreduce using NVLink SHARP (intra-node only).
Add new config options: cgaClusterSize, minCTAs, maxCTAs, netName.
Enable LL128 when we use PXN to close rings.
NVTX3 includes update.
Fix crash when one CollNet (SHARP) rail fails to initialize.
v2.17.1-1
2023-03-01 00:39:04 -08:00
Sylvain Jeaugey
f3d5166783 2.16.5-1
Add support for 400Gbit NDR network adapters (CX7)
Handle EINTR in socket poll() function
Add NCCL_PROGRESS_APPENDOP_FREQ to control op append overhead
Resource cleanup fixes
Fix double free in case of init failure
Fix crash in ncclCommAbort
Revert AMD speed commit
v2.16.5-1
2023-02-02 12:52:47 -08:00
Rashika Kheria
93840e7476 Fix maximum handle size for NCCL Net v4 API
NCCL Net v4 supports a maximum handle size of 64 bytes, whereas the
ext-net example header files set it for NCCL Net v3. Since the
`aws-ofi-nccl` plugin plans to follow the example header files, fix it
here.

Signed-off-by: Rashika Kheria <rashika@amazon.com>
2023-01-18 13:31:57 +01:00
Sylvain Jeaugey
28189e2df8 2.16.2-1
Add support for CUDA 12.0, drop Kepler (sm_35).
Support for H100 features.
Make socket code more robust and protected. Solves #555.
Improve performance on large CUDA graphs, reducing dependencies.
Reduce inter-socket bandwidth on AMD CPUs to favor better paths.
Various fixes to ncclCommAbort.
Make service thread polling resistant to EINTR.
Compile with profiling API by default.
Extend NVTX instrumentation with call arguments.
v2.16.2-1
2022-11-30 02:31:59 -08:00
Sylvain Jeaugey
614b49f0de Fix google-fastsocket plugin build 2022-11-22 02:13:13 -08:00
Sylvain Jeaugey
55b1d8ab98 Add documentation for NCCL NET plugins
Also repurpose dummy plugin as example, including headers and
compat layers from v6 to v2.
2022-11-22 02:12:53 -08:00
Sylvain Jeaugey
2f4cb874ba Merge tag 'v2.15.5-1' 2022-10-25 01:15:22 -07:00
Sylvain Jeaugey
cb111f764a 2.15.5-1
Fix crash with CollnetChain on some node topologies
Fix hang when interleaving the capture of different graphs
Fix hang during init in multi-threaded mode
Fix potential data corruption with LL128 protocol on unaligned buffers.
Fix CPU usage during preconnect
Fix double-free in the error path for ncclCommInitAll.
Workaround hang on H100 with Ring/LL128 on 2 GPUs.
v2.15.5-1
2022-10-25 00:55:55 -07:00
Sylvain Jeaugey
d128d62238 Merge tag 'v2.15.1-1' 2022-10-07 11:00:26 -07:00
John Bachan
2401f4a918 Fixes a double-free in the error path of ncclCommInitAll.
Fixes https://github.com/NVIDIA/nccl/issues/726
2022-10-03 17:12:32 -07:00
Sylvain Jeaugey
da8152e57a 2.15.1-1
Add support for H100 (sm90).
Make sure NCCL kernel honor user stream priorities.
v2.15.1-1
2022-09-27 02:31:13 -07:00
Sylvain Jeaugey
99c28f2e75 Merge remote-tracking branch 'origin/master' 2022-09-27 02:24:41 -07:00
Cliff Woolley
78313a6d21 Use compatibility shim only with static cudart
Closes issue 658
2022-09-27 02:22:48 -07:00
Sylvain Jeaugey
ecab28a7c9 Fix potential deadlock during init in multi-thread mode.
Make sure all functions calling cudaMalloc (including devCommSetup) are
called before the last bootstrapBarrier. That way, we avoid cudaMalloc
calls being blocked by an NCCL kernel launched on another GPU by
another thread which completed init faster.

Resolve #623.
2022-09-26 02:13:10 -07:00
Jane Xu
f89fd4777d address review comments 2022-09-20 11:58:33 +02:00
Jane Xu
79fb0326ac Fix intermittent 11.6 builds: generate unique .cu file for each object file 2022-09-20 11:58:33 +02:00
Sylvain Jeaugey
c4e2aa6c79 2.14.3-1
Add support for improved fault tolerance: non-blocking mode, new
init function with config, and ncclCommFinalize function.
Reintroduce collnet+chain algorithm, alongside collnet+direct.
Add LL protocol for intra-node P2P (on by default) and network
communication (off by default).
Use network instead of shared memory when performance is better.
Fix: wait for CUDA graph destroy before destroying comm with linked
graph resources.
Remove aggressive polling during enqueue.
Fix DMABUF fallback on MOFED 5.4 and earlier.
v2.14.3-1
2022-08-18 02:53:17 -07:00
Ching-Hsiang Chu
e1d9b273b0 fix NCCL_DEBUG_FILE
Summary: NCCL_DEBUG_FILE has not worked properly since the v2.13.4 updates (https://github.com/NVIDIA/nccl/pull/682), because `ncclDebugLevel` is now set after `NCCL_DEBUG_FILE` is parsed. This patch moves the parsing of `tempNcclDebugLevel` before the processing of `NCCL_DEBUG_FILE`, to ensure `NCCL_DEBUG_FILE` is parsed only when `NCCL_DEBUG > NCCL_LOG_VERSION` (same as the previous behavior).

Differential Revision: D38415208

fbshipit-source-id: 5689bbb798e73efb9e8594557666987f07e89a30
2022-08-18 11:50:42 +02:00
Sylvain Jeaugey
19ab67d172 2.13.4-1
Optimize CUDA graph launch; avoid launching a CPU callback for
intra-node operations.
Simplify kernel common code to improve the latency of send/recv
operations.
Strengthen CUDA streams semantics.
Change NET API to v6, to add dmabuf support.
Add ncclGetLastError() function.
Add ncclRemoteError code and use it for remote network errors.
Support the use of a different NCCL_NET parameter per communicator.
Add support for SHM and P2P transfers using cudaMemcpy.
v2.13.4-1
2022-07-11 08:10:34 -07:00
Sylvain Jeaugey
7aa1c46fd5 2.12.12-1
Improve allreduce performance when we have more than one network interface per
GPU and we need to use PXN to close rings.
Add support for PCI Gen5 on 5.4 kernels.
Fix crash when setting NCCL_SET_THREAD_NAME.
Fix random crash in init due to uninitialized struct.
Fix hang on cubemesh topologies.
Add P2P_DIRECT_DISABLE parameter to disable direct access to pointers within a
process.
v2.12.12-1
2022-05-13 00:26:57 -07:00
Sylvain Jeaugey
9bfc1c6e35 Update Makefile to install static library.
Make sure make install also installs the static library. 
Fixes #662
2022-04-08 14:00:43 +02:00
Sylvain Jeaugey
8133784b32 Merge remote-tracking branch 'origin/master' 2022-03-30 02:29:05 -07:00
Sylvain Jeaugey
353e8ba446 2.12.10-1
Fix bug with CollNet
Fix bug with zero-bytes send/recv operations
Fix NCCL_PARAM implementation to avoid taking a lock on every call
Fix bug when setting NCCL_IB_QPS_PER_CONNECTION to more than one.
Improve error reporting for network errors.
v2.12.10-1
2022-03-30 02:27:01 -07:00
Sylvain Jeaugey
2247152a8e Fix merging error 2022-03-30 02:14:32 -07:00
Sylvain Jeaugey
2dfd83752c
Merge branch 'master' into truncated_msg_warning 2022-03-30 10:58:05 +02:00
Ke Wen
1382a87306 Display host name instead of numeric IP when referring to a peer
For easier interpretation of debug messages like "connection closed by
peer", "peer message truncated" and "peer collective mismatch"
2022-03-30 10:47:10 +02:00
Christopher Hesse
b895abcdb8 Fix typo in net_ib.cc 2022-03-30 10:45:01 +02:00
Felix Abecassis
1c7c014ceb Remove unnecessary newline in plugin logging
Signed-off-by: Felix Abecassis <fabecassis@nvidia.com>
2022-03-30 10:44:49 +02:00
John Bachan
44eb40da0e Add pthread_detach()'s for threads we never pthread_join(). Helps
reduce diagnostic noise for ThreadSanitizer.

Fixes https://github.com/NVIDIA/nccl/issues/649
2022-03-15 10:27:59 -07:00
Sylvain Jeaugey
3c223c105a 2.12.7-1
Add network communication through another GPU connected with NVLink
(PXN).
Add aggregation of messages coming from different local GPUs through
PXN and going to the same destination.
Add new v5 plugin API with grouped receives and tags.
Add compat for v4 plugins.
Add naming of NCCL threads to help debugging.
Fix NVLink detection and avoid data corruption when some NVLinks are
down.
Add support for Relaxed Ordering for IB.
Add profiling and timing infrastructure.
v2.12.7-1
2022-03-02 20:48:56 +01:00
Ke Wen
fbfb6ac5d7 Split IB parameter sanity check into two parts
First part on collective mismatch, second part on internal errors
2022-02-08 15:21:22 -08:00
Sylvain Jeaugey
0144073673 Fix ext-net/google-fastsocket build 2022-01-24 07:19:48 -08:00
Sylvain Jeaugey
cc78e9fab8 Revert "remove unused basePath"
This reverts commit 445bc1965720787aa19c8fc1c0bf62db43db2dda.
2022-01-21 12:30:34 +01:00
void-main
445bc19657 remove unused basePath 2022-01-21 12:12:26 +01:00
Chang Lan
c5790b3672 Build fastsocket plugin from ext-net 2021-12-09 08:41:05 +01:00
Ke Wen
c88c9f873f Add env NCCL_NET_DISABLE_INTRA
Disable the NET transport for intra-node communication by setting the env to 1.
This provides an option to error out instead of falling back to NET when superior intra-node transports (P2P and SHM) are unavailable.
2021-12-08 16:28:19 +01:00
Ke Wen
f589932130 Improve warning message about truncated messages
Display hints about the cause so that it is easier for users to debug.
Also change the error type from InternalError to InvalidUsage, as most
of the time this is caused by a mismatch in collective size or env settings.
2021-12-02 16:13:15 -08:00
Chris Jones
8cf7325d69 Perform busIdToInt64 on the stack.
I noticed when I enabled `NCCL_DEBUG_SUBSYS=ALLOC` that this function is
called thousands of times, making the log output unintelligible.
Fortunately, this function can be implemented without heap allocations.
2021-11-19 09:35:55 +01:00
John Bachan
30ca3fcacf Fix compilation failure in "src/enqueue.cc" on older GCC because of
missing `#include <cstring>`.
2021-09-23 09:55:16 -07:00
Sylvain Jeaugey
4ec992fab7 Fix Collnet when GDR is disabled 2021-09-22 05:19:16 -07:00