tudasc/cusan

A data race detector for CUDA C and C++ based on ThreadSanitizer

CuSan is a tool to find data races between (asynchronous) CUDA calls and the host.

To that end, during compilation with Clang/LLVM, CuSan analyzes and instruments CUDA API usage in the target code to track CUDA-specific memory accesses and synchronization semantics. Our runtime then exposes this information to ThreadSanitizer (packaged with Clang/LLVM) for the final data race analysis.

Usage

Making use of CuSan consists of two phases:

  1. Compile your code using one of the CuSan compiler wrappers, e.g., cusan-clang++ or cusan-mpic++. This will (a) analyze and instrument the CUDA API, such as kernel calls and their particular memory access semantics (r/w), (b) add ThreadSanitizer instrumentation automatically (-fsanitize=thread), and (c) finally link our runtime library.
  2. Execute the target program for the data race analysis. Our runtime internally calls ThreadSanitizer to expose the CUDA synchronization and memory access semantics.
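To illustrate the kind of defect CuSan targets, consider the following sketch (a hypothetical race.cu, not part of the repository). The host reads pinned memory while an asynchronous device-to-host copy may still be in flight; compiled with a CuSan wrapper, this pattern should be flagged as a host-device data race:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fill(int* arr, int n) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) arr[i] = i;  // device write
}

int main() {
  const int n = 256;
  int *d_data = nullptr, *h_data = nullptr;
  cudaMalloc(&d_data, n * sizeof(int));
  cudaMallocHost(&h_data, n * sizeof(int));  // pinned, so the copy below is truly asynchronous

  fill<<<1, n>>>(d_data, n);  // asynchronous kernel launch on the default stream
  cudaMemcpyAsync(h_data, d_data, n * sizeof(int), cudaMemcpyDeviceToHost);
  // Missing cudaDeviceSynchronize() here: the host read below races with the
  // still-in-flight device-to-host copy.
  printf("h_data[0] = %d\n", h_data[0]);

  cudaDeviceSynchronize();
  cudaFreeHost(h_data);
  cudaFree(d_data);
  return 0;
}
```

Inserting a cudaDeviceSynchronize() (or cudaStreamSynchronize on the relevant stream) before the host read removes the race.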

Example usage

Given the file 02_event.c, execute the following for CUDA data race detection:

```shell
$ cusan-clang -O3 -g 02_event.c -x cuda -gencode arch=compute_70,code=sm_70 -o event.exe
$ export TSAN_OPTIONS=ignore_noninstrumented_modules=1
$ ./event.exe
```

Checking CUDA-aware MPI applications

You need to use the MPI correctness checker MUST, or preload our (very) simple MPI interceptor libCusanMPIInterceptor.so, for CUDA-aware MPI data race detection. These libraries call ThreadSanitizer with the particular access semantics of MPI. Therefore, the combined semantics of CUDA and MPI are properly exposed to ThreadSanitizer to detect data races between data-dependent MPI and CUDA calls.

Example usage for MPI

Given the file 03_cuda_to_mpi.c, execute the following for CUDA data race detection:

```shell
$ cusan-mpic++ -O3 -g 03_cuda_to_mpi.c -x cuda -gencode arch=compute_70,code=sm_70 -o cuda_to_mpi.exe
$ LD_PRELOAD=/path/to/libCusanMPIInterceptor.so mpirun -n 2 ./cuda_to_mpi.exe
```

Note: To avoid false positives, ThreadSanitizer suppression files might be needed; see, for example, suppression.txt, or the documentation for sanitizer special case lists.
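A minimal suppression file sketch, using the standard ThreadSanitizer suppressions syntax (the patterns below are illustrative placeholders, not the contents of the repository's suppression.txt):

```
# Ignore reports whose stack originates in non-instrumented MPI internals
called_from_lib:libopen-pal
# Ignore races matched by function/stack-frame pattern
race:opal_*
```

The file is passed to the runtime via TSAN_OPTIONS, e.g. TSAN_OPTIONS="suppressions=/path/to/suppression.txt".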

Example report

The following is an example report for 03_cuda_to_mpi.c of our test suite, where the necessary synchronization is not called:

```cuda
L.18 __global__ void kernel(int* arr, const int N)
...
L.53 int* d_data;
L.54 cudaMalloc(&d_data, size * sizeof(int));
L.55
L.56 if (world_rank == 0) {
L.57   kernel<<<blocksPerGrid, threadsPerBlock>>>(d_data, size);
L.58 #ifdef CUSAN_SYNC
L.59   cudaDeviceSynchronize(); // CUSAN_SYNC needs to be defined
L.60 #endif
L.61   MPI_Send(d_data, size, MPI_INT, 1, 0, MPI_COMM_WORLD);
```
```
==================
WARNING: ThreadSanitizer: data race (pid=579145)
  Read of size 8 at 0x7f1587200000 by main thread:
    #0 main cusan/test/runtime/03_cuda_to_mpi.c:61:5 (03_cuda_to_mpi.c.exe+0xfad11)
  Previous write of size 8 at 0x7f1587200000 by thread T6:
    #0 __device_stub__kernel(int*, int) cusan/test/runtime/03_cuda_to_mpi.c:18:47 (03_cuda_to_mpi.c.exe+0xfaaed)
  Thread T6 'cuda_stream 0' (tid=0, running) created by main thread at:
    #0 cusan::runtime::Runtime::register_stream(cusan::runtime::Stream) <null> (libCusanRuntime.so+0x3b830)
    #1 main cusan/test/runtime/03_cuda_to_mpi.c:54:3 (03_cuda_to_mpi.c.exe+0xfabc7)
SUMMARY: ThreadSanitizer: data race cusan/test/runtime/03_cuda_to_mpi.c:61:5 in main
==================
ThreadSanitizer: reported 1 warnings
```

Caveats: ThreadSanitizer and OpenMPI

Known issues (observed on the Lichtenberg HPC system) when making ThreadSanitizer work with OpenMPI 4.1.6:

  • The Intel Compute Runtime requires environment flags to work with sanitizers, see Intel Compute Runtime issue 376:

    export NEOReadDebugKeys=1
    export DisableDeepBind=1

  • The sanitizer memory interceptor does not play well with OpenMPI's, see OpenMPI issue 12819. The patcher needs to be disabled:

    export OMPI_MCA_memory=^patcher
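Putting the workarounds together with the TSAN_OPTIONS setting from the usage example, a job script might export the following (a sketch; the values are taken verbatim from the issues above):

```shell
# Intel Compute Runtime: allow running under sanitizers (issue 376)
export NEOReadDebugKeys=1
export DisableDeepBind=1
# OpenMPI: disable the memory patcher that clashes with the sanitizer interceptor (issue 12819)
export OMPI_MCA_memory=^patcher
# ThreadSanitizer: ignore modules without TSan instrumentation
export TSAN_OPTIONS=ignore_noninstrumented_modules=1
```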

Building CuSan

CuSan is tested with LLVM versions 14 and 18, and CMake version >= 3.20. Use the CMake presets develop or release to build.

Dependencies

CuSan was tested on the TUDa Lichtenberg II cluster with:

  • System modules: 1) gcc/11.2.0 2) cuda/11.8 3) openmpi/4.1.6 4) git/2.40.0 5) python/3.10.10 6) clang/14.0.6 or clang/18.1.8
  • Optional external libraries: TypeART, FiberPool (both default off)
  • Testing: llvm-lit, FileCheck
  • GPU: Tesla T4 and Tesla V100 (mostly: arch=sm_70)

Build example

CuSan uses CMake to build. Example build recipe (release build, installs to the default prefix ${cusan_SOURCE_DIR}/install/cusan):

```shell
$> cd cusan
$> cmake --preset release
$> cmake --build build --target install --parallel
```

Build options

| Option | Default | Description |
|---|---|---|
| CUSAN_TYPEART | OFF | Use the TypeART library to track memory allocations. |
| CUSAN_FIBERPOOL | OFF | Use an external library to efficiently manage fiber creation. |
| CUSAN_SOFTCOUNTER | OFF | Runtime stats for calls to ThreadSanitizer and CUDA callbacks. Only use for stats collection, not race detection. |
| CUSAN_SYNC_DETAIL_LEVEL | ON | Analyze, e.g., memcpy and memcpyasync w.r.t. arguments to determine implicit sync. |
| CUSAN_LOG_LEVEL_RT | 3 | Granularity of the runtime logger. 3 is most verbose, 0 is least. For release, set to 0. |
| CUSAN_LOG_LEVEL_PASS | 3 | Granularity of the pass plugin logger. 3 is most verbose, 0 is least. For release, set to 0. |
