src-d/minhashcuda

Weighted MinHash implementation on CUDA (multi-GPU).

This project is a reimplementation of the Weighted MinHash calculation from ekzhu/datasketch in NVIDIA CUDA, and thus brings a 600-1000x speedup over numpy with MKL (Titan X 2016 vs 12-core Xeon E5-1650). It supports running on multiple GPUs to be even faster; e.g., processing a 10M x 12M matrix with sparsity 0.0014 takes 40 minutes using two Titan Xs. The produced results are bit-to-bit identical to the reference implementation. Read the article.

The input format is a 32-bit float CSR matrix. The code is optimized for low memory consumption and speed.

What is Weighted MinHash

MinHash can be used to compress an unweighted set or binary vector and to estimate the unweighted Jaccard similarity. It is possible to adapt MinHash to the weighted Jaccard by expanding each item (or dimension) by its weight. However, this approach does not support real-number weights, and it can be very expensive if the weights are large. Weighted MinHash was created by Sergey Ioffe, and its performance does not depend on the weights, as long as the universe of all possible items (or dimensions, for vectors) is known. This makes it unsuitable for stream processing, where the knowledge of unseen items cannot be assumed.
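For reference, the quantity all of these schemes estimate is the weighted Jaccard similarity J(x, y) = Σ min(xᵢ, yᵢ) / Σ max(xᵢ, yᵢ). A minimal numpy sketch of the exact (non-hashed) computation; the function name here is ours and is not part of the library:

import numpy

def weighted_jaccard(x, y):
    # Exact weighted Jaccard: sum of element-wise minima over sum of maxima.
    return numpy.minimum(x, y).sum() / numpy.maximum(x, y).sum()

x = numpy.array([0., 2., 1., 0.])
y = numpy.array([1., 1., 1., 0.])
print(weighted_jaccard(x, y))  # minima sum to 2, maxima sum to 4 -> 0.5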

Building

cmake -DCMAKE_BUILD_TYPE=Release . && make

It requires cudart, curand >= 8.0, an OpenMP 4.0 compatible compiler (that is, not gcc <= 4.8) and cmake >= 3.2. If numpy headers are not found, specify the includes path by defining NUMPY_INCLUDES. If you do not want to build the Python native module, add -D DISABLE_PYTHON=y. If CUDA is not automatically found, add -D CUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0 (change the path to the actual one).

If you are building in a Docker container, you may encounter the following error:

Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY)

This means you need to install the rest of the CUDA toolkit, which can be done as in the nvidia/cuda:8.0-devel Dockerfile. If you still run into Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS), then run:

ln -s /usr/local/cuda/targets/x86_64-linux/include/* /usr/local/cuda/include/
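For example, a configure step combining several of the options above might look like this (the CUDA path is a placeholder, and calling numpy.get_include() is simply one way to locate the numpy headers):

cmake -DCMAKE_BUILD_TYPE=Release -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-8.0 -DNUMPY_INCLUDES=$(python -c "import numpy; print(numpy.get_include())") . && make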

Python users: if you are using Linux x86-64 and CUDA 8.0, then you can install this easily:

pip install libMHCUDA

Otherwise, you'll have to install it from source:

pip install git+https://github.com/src-d/minhashcuda.git

Building in Python virtual environments, e.g. pyenv or conda, is officially not supported. You can still submit patches to fix the related problems.

Testing

test.py contains the unit tests based on unittest. They require datasketch and scipy.
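Assuming test.py defines the standard unittest entry point, the suite can be run directly:

python3 test.py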

Contributions

...are welcome! See CONTRIBUTING and the code of conduct.

License

Apache 2.0

Python example

import libMHCUDA
import numpy
from scipy.sparse import csr_matrix

# Prepare the rows
numpy.random.seed(1)
data = numpy.random.randint(0, 100, (6400, 130))
mask = numpy.random.randint(0, 5, data.shape)
data *= (mask >= 4)
del mask
m = csr_matrix(data, dtype=numpy.float32)
del data
# We've got an 80% sparse matrix 6400 x 130

# Initialize the hasher aka "generator" with 128 hash samples for every row
gen = libMHCUDA.minhash_cuda_init(m.shape[-1], 128, seed=1, verbosity=1)

# Calculate the hashes. Can be executed several times with a different number of rows
hashes = libMHCUDA.minhash_cuda_calc(gen, m)

# Free the resources
libMHCUDA.minhash_cuda_fini(gen)

The functions can be easily wrapped into a class (not included).
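A minimal sketch of such a wrapper, based on the API described below (this class is ours and is not shipped with the library):

import libMHCUDA

class WeightedMinHasher:
    # Context-manager wrapper which guarantees that the GPU resources are freed.
    def __init__(self, dim, samples=128, **kwargs):
        self._gen = libMHCUDA.minhash_cuda_init(dim, samples, **kwargs)

    def calc(self, csr, row_start=0, row_finish=0xffffffff):
        return libMHCUDA.minhash_cuda_calc(self._gen, csr, row_start, row_finish)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        libMHCUDA.minhash_cuda_fini(self._gen)

Usage then becomes: with WeightedMinHasher(m.shape[-1], 128, seed=1) as hasher: hashes = hasher.calc(m)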

Python API

Import "libMHCUDA".

def minhash_cuda_init(dim, samples, seed=time(), deferred=False, devices=0, verbosity=0)

Creates the hasher.

dim integer, the number of dimensions in the input. In other words, the length of each weight vector. Must be less than 2³².

samples integer, the number of hash samples. The greater the value, the more precise the estimates, but the larger the hash size and the longer the calculation (linear). Must not be prime, for performance reasons, and must be less than 2¹⁶.

seed integer, the random generator seed for reproducible results.

deferred boolean, if True, disables the initialization of the WMH parameters with random numbers. In that case, the user is expected to call minhash_cuda_assign_random_vars() afterwards.

devices integer, bitwise OR-ed CUDA device indices, e.g. 1 means the first device, 2 means the second device, 3 means using the first and second devices. The special value 0 enables all available devices. The default value is 0.

verbosity integer, 0 means complete silence, 1 means mere progress logging, 2 means lots of output.

return integer, pointer to generator struct (opaque).
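For instance, to restrict the hasher to the first two GPUs, pass the bitmask 1 | 2 (the dimension and sample count below are arbitrary):

gen = libMHCUDA.minhash_cuda_init(130, 128, seed=1, devices=1 | 2)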

def minhash_cuda_calc(gen, matrix, row_start=0, row_finish=0xffffffff)

Calculates the Weighted MinHashes. May reallocate memory on the GPU but does its best to reuse the buffers.

gen integer, pointer to generator struct obtained from init().

matrix scipy.sparse.csr_matrix instance, the number of columns must match dim. The number of rows must be less than 2³¹.

row_start integer, slice start offset (the index of the first row to process). Enables efficient zero-copy sparse matrix slicing.

row_finish integer, slice finish offset (the index of the row after the last one to process). The resulting matrix row slice is [row_start:row_finish].

return numpy.ndarray of shape (number of matrix rows, samples, 2) and dtype uint32.
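For example, a big matrix can be hashed in chunks without copying it (the row bounds below are arbitrary and refer to the 6400-row matrix from the example above):

first = libMHCUDA.minhash_cuda_calc(gen, m, row_start=0, row_finish=3200)
second = libMHCUDA.minhash_cuda_calc(gen, m, row_start=3200, row_finish=6400)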

def minhash_cuda_fini(gen)

Disposes of any resources allocated by init() and subsequent calc()-s. The generator pointer is invalidated.

gen integer, pointer to generator struct obtained from init().

C API

Include "minhashcuda.h".

MinhashCudaGenerator *mhcuda_init(uint32_t dim, uint16_t samples, uint32_t seed, int deferred, uint32_t devices, int verbosity, MHCUDAResult *status)

Initializes the Weighted MinHash generator.

dim the number of dimensions in the input. In other words, the length of each weight vector.

samples the number of hash samples. The greater the value, the more precise the estimates, but the larger the hash size and the longer the calculation (linear). Must not be prime, for performance reasons.

seed the random generator seed for reproducible results.

deferred if set to anything except 0, disables the initialization of the WMH parameters with random numbers. In that case, the user is expected to call mhcuda_assign_random_vars() afterwards.

devices bitwise OR-ed CUDA device indices, e.g. 1 means the first device, 2 means the second device, 3 means using the first and second devices. The special value 0 enables all available devices.

verbosity 0 means complete silence, 1 means mere progress logging, 2 means lots of output.

status pointer to the reported return code. May be nullptr. In case of any error, the returned result is nullptr and the code is stored into *status (with a nullptr check).

return pointer to the allocated generator opaque struct.

MHCUDAResult mhcuda_calc(const MinhashCudaGenerator *gen, const float *weights, const uint32_t *cols, const uint32_t *rows, uint32_t length, uint32_t *output)

Calculates the Weighted MinHashes for the specified CSR matrix.

gen pointer to the generator opaque struct obtained from mhcuda_init().

weights the sparse matrix's values.

cols the sparse matrix's column indices, must be the same size as weights.

rows the sparse matrix's row offsets. The first element is always 0, the last is effectively the size of weights and cols.

length the number of rows. The "rows" argument must have size (length + 1) because of the leading 0.

output the resulting hashes, an array of size length x samples x 2.

return the status code.
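The three input arrays are exactly the standard CSR buffers, so e.g. scipy's csr_matrix exposes them directly. A short sketch of the correspondence (in Python, for illustration only):

from scipy.sparse import csr_matrix
import numpy

m = csr_matrix(numpy.eye(3, dtype=numpy.float32))
weights = m.data     # the non-zero values (float32)
cols = m.indices     # the column index of every value, same size as weights
rows = m.indptr      # length + 1 row offsets; rows[0] == 0, rows[-1] == len(weights)
length = m.shape[0]  # the number of rows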

MHCUDAResult mhcuda_fini(MinhashCudaGenerator *gen);

Frees any resources allocated by mhcuda_init() and mhcuda_calc(), including the device buffers. The generator pointer is invalidated.

gen pointer to the generator opaque struct obtained from mhcuda_init().

return the status code.
