linking against an external library dependency in site_packages #782

Answered by bumi001
bumi001 asked this question in Q&A

I am simply trying to get the code from https://github.com/daknuett/extension-cpp-meson to install to site_packages and run. It builds and installs, but symbols cannot be resolved when I try to import extension_cpp.

Using ldd shows:

ldd _C.cpython-313-x86_64-linux-gnu.so
        linux-vdso.so.1 (0x00007fff93bea000)
        libc10.so => not found
        libtorch_cpu.so => not found
        libstdc++.so.6 => /lib/x86_64-linux-gnu/libstdc++.so.6 (0x0000794e5e600000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x0000794e5e5d2000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x0000794e5e200000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x0000794e5e4e9000)
        /lib64/ld-linux-x86-64.so.2 (0x0000794e5e8e0000)

Both libc10.so and libtorch_cpu.so are available in .venv/lib/python3.13/site-packages/torch/lib

ls .venv/lib/python3.13/site-packages/torch/lib
libc10.so       libcaffe2_nvrtc.so  libshm.so    libtorch_cpu.so   libtorch_cuda_linalg.so  libtorch_python.so
libc10_cuda.so  libgomp.so.1        libtorch.so  libtorch_cuda.so  libtorch_global_deps.so

How do I change meson.build so that _C.cpython-313-x86_64-linux-gnu.so contains the correct path for libc10.so and libtorch_cpu.so?

PS: The unresolved symbol is the same one as in this discussion:
vllm-project/vllm#5501





Broadly speaking, this is impossible for library technology to do in the first place.

libtorch_cpu.so etc. can be in any directory, because pytorch itself could be installed to any site-packages directory in sys.path; therefore there is no way to compute the correct location to compile into your own extension.

You need indirection for this, in the form of e.g. capsule imports, but that assumes that the project in question (pytorch, here) supports such usage. The other approach to this is that your extension is "broken" by default but works when imported by a python module that first dynamically computes the search path and then sets things up. I don't know what pytorch users usually do (I don't use pytorch myself).

It's possible that some solutions might be "good enough", e.g. hardcoding the absolute path on the build machine, which means all users must compile their own copy of the package. You could also assume that people don't install to multiple sys.path directories (they actually do, but maybe the users of any given package don't) and simply hardcode an rpath of ${ORIGIN}/../torch/lib, as sketched below.
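
A minimal sketch of that last option, assuming a meson-python layout where the extension installs to site-packages/extension_cpp and torch ships its libraries in site-packages/torch/lib; the target name, source file, and torch_dep object are placeholders, not the project's actual meson.build:

```meson
# Sketch only: bake a relative rpath into the extension so that, at load time,
# the dynamic loader also searches ../torch/lib relative to the installed
# extension. `torch_dep` (the torch compile/link flags) is assumed to have
# been declared earlier.
python = import('python').find_installation(pure: false)
python.extension_module('_C',
  'extension_cpp/csrc/muladd.cpp',
  dependencies: [torch_dep],
  install: true,
  subdir: 'extension_cpp',
  # $ORIGIN expands at load time to the directory containing _C.*.so
  install_rpath: '$ORIGIN/../torch/lib',
)
```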

Does pytorch not provide any guidance for this?

@bumi001

PyTorch officially only supports setuptools. They are trying to move away from it. You may be interested in their analysis of meson and scikit-build-core at pytorch/pytorch#157807.

@rgommers

That is for building PyTorch itself, which is quite different from "build an extension module against the PyTorch C++ API". For the latter, multiple build systems are supported, and any build system that sets up the right compile and link flags should work. The question is just how easy that is, which varies.

I think there's not much Meson usage because of the lack of solid CUDA support; CMake is pretty much the standard there. For CPU it shouldn't be too difficult. I like the approach that daknuett/extension-cpp-meson is taking to extract paths/flags from torch.utils.cpp_extension dynamically; that should be more maintainable than hardcoding paths.

I'll try to find time to test and see what the problem is. Also Cc @daknuett for visibility (I hope you don't mind the ping).
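
For reference, a rough sketch of that dynamic extraction; include_paths() and library_paths() are real helpers in torch.utils.cpp_extension, but everything else here is illustrative rather than daknuett's actual meson.build:

```meson
# Sketch: ask the installed torch where its headers and libraries live, then
# wrap that in a dependency object. The paths are passed as plain -I/-L flags
# rather than include_directories(), which rejects absolute paths that fall
# inside the source tree (e.g. an in-tree .venv).
python = import('python').find_installation(pure: false)
torch_inc = run_command(python,
  ['-c', 'import torch.utils.cpp_extension as e; print(e.include_paths()[0])'],
  check: true).stdout().strip()
torch_lib = run_command(python,
  ['-c', 'import torch.utils.cpp_extension as e; print(e.library_paths()[0])'],
  check: true).stdout().strip()
torch_dep = declare_dependency(
  compile_args: ['-I' + torch_inc],
  link_args: ['-L' + torch_lib, '-ltorch', '-ltorch_cpu', '-lc10'],
)
```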

@bumi001

I agree with you that daknuett's approach is the better one. I hard-coded the paths just to see if it would work without the --no-build-isolation flag. Using the meson.build as-is from daknuett, with the --no-build-isolation flag, also produces the same undefined symbol error. I am also trying to figure out why this is happening.

I am also wondering what kind of support Meson currently provides for CUDA.

@eli-schwartz

https://mesonbuild.com/Cuda-module.html
https://mesonbuild.com/Dependencies.html#cuda
executable('myprog', 'myprog.cu')

I'm not a CUDA user but I know we have a bunch of happy users of our CUDA support.

Torch (and specifically its C++ ABI) is not CUDA -- it's possible to have integrated support for the latter and not the former, clearly.

@rgommers

CUDA support in Meson seems to be missing compute capabilities 8.9, 9, 10, and 12 in the docs; the implementation looks partial (it does have 8.9 and 9.0, but nothing beyond that). CUDA 13 was also just released and needs to be added here.

I had wanted to poke at this PyTorch support a bit (since I do work on and with PyTorch), but I am unlikely to get around to it soon unfortunately.


Quick question: how did you set up your environment and build extension-cpp-meson? I suspect this is caused by doing pip install . (or similar) without adding --no-build-isolation, which may cause the paths to the PyTorch libraries to point at temporary build directories.

@bumi001

I am wondering if one of you (@rgommers, @eli-schwartz) could point me to how to create a ninja rule like the following using Meson:

rule compile
  command = $cxx -MMD -MF $out.d $cflags -c $in -o $out $post_cflags
  depfile = $out.d
  deps = gcc
@bumi001

@daknuett, I am wondering what your setup is. After cloning extension-cpp-meson, I created a virtual environment inside the extension-cpp-meson directory and installed torch, numpy, meson-python, and ninja. After that I tried the following command, as per your README.

python -m pip install --no-build-isolation .

I get the following error.

../meson.build:27:17: ERROR: Tried to form an absolute path to a dir in the source tree.
You should not do that but use relative paths instead, for
directories that are part of your project.

I am wondering how you did not encounter that.

@eli-schwartz

It should suffice to add the post_cflags via Meson's support for cpp_args on a specific target, or via add_project_arguments().
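
A short sketch of both variants; the target name, source file, and the flag itself are placeholders, and `python` is assumed to be the python installation object the project's meson.build already defines:

```meson
# Project-wide flags must be added before any target is declared:
add_project_arguments('-DSOME_FLAG=1', language: 'cpp')

# Alternatively, per-target flags (the analogue of $post_cflags in the
# ninja rule above):
python.extension_module('_C',
  'extension_cpp/csrc/muladd.cpp',
  cpp_args: ['-DSOME_FLAG=1'],   # hypothetical extra compile flag
  install: true,
)
```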

@bumi001

Turns out I didn't need to use cpp_args or add_project_arguments(). But I needed to add the following to the meson.build that @daknuett has.

min_supported_cpython = run_command(python, ['-c', 'import torch.utils.cpp_extension; print(torch.utils.cpp_extension.min_supported_cpython)'], check: true).stdout().strip()
add_global_arguments('-D_Py_LIMITED_API=@0@'.format(min_supported_cpython), language: 'cpp')

Now, nm produces what I have been seeking:

nm -u --demangle _C.cpython-313-x86_64-linux-gnu.so | grep DispatchKey
                 U c10::impl::ExcludeDispatchKeyGuard::ExcludeDispatchKeyGuard(c10::DispatchKeySet)
                 U c10::impl::ExcludeDispatchKeyGuard::~ExcludeDispatchKeyGuard()
                 U torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int)
Answer selected by bumi001
@daknuett

Thank you for your response, @daknuett. I still face the unresolved symbol issue. However, I think I found the reason for the undefined symbol.

First, in site_packages/extension_cpp, I used nm to find:

site-packages/extension_cpp$ nm -u --demangle _C.cpython-313-x86_64-linux-gnu.so | grep DispatchKey | grep -v Guard
                 U torch::Library::Library(torch::Library::Kind, std::string, std::optional<c10::DispatchKey>, char const*, unsigned int)

Then, in site_packages/torch/lib, I used nm to find:

site-packages/torch/lib$ nm -a --demangle * | grep 'torch::Library::Library'
nm: libgomp.so.1: no symbols
00000000018eebd0 T torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int)
00000000018eebd0 T torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int)
0000000000ec5c72 t torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int) [clone .cold]
                 U torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int)
                 U torch::Library::Library(torch::Library::Kind, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::optional<c10::DispatchKey>, char const*, unsigned int)

This appears to be related to incompatible ABIs (pytorch/pytorch#13541): the extension references the old-ABI std::string constructor, while libtorch_cpu.so only exports the new-ABI std::__cxx11::basic_string one.

In meson.build, you have:

cxx11abi = run_command(python, ['-c', 'import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)']).stdout().strip()
add_global_arguments('-D_GLIBCXX_USE_CXX11_ABI=@0@'.format(cxx11abi), language: 'cpp')

I am not sure why I still have the unresolved symbol issue.

I can confirm that I get the same error as you. I am working on it.

@daknuett

min_supported_cpython = run_command(python, ['-c', 'import torch.utils.cpp_extension; print(torch.utils.cpp_extension.min_supported_cpython)'], check: true).stdout().strip()
add_global_arguments('-D_Py_LIMITED_API=@0@'.format(min_supported_cpython), language: 'cpp')

I cannot confirm that. Instead, I found that the following change resolves the ABI issue:

diff --git a/meson.build b/meson.build
index 5b612e7..1f26452 100644
--- a/meson.build
+++ b/meson.build
@@ -63,8 +63,9 @@ endforeach
 
 # Defines that are required for torch compatibility #
 
 # ABI compatibility
-cxx11abi = run_command(python, ['-c', 'import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)']).stdout().strip()
-add_global_arguments('-D_GLIBCXX_USE_CXX11_ABI=@0@'.format(cxx11abi), language: 'cpp')
+cxx11abi = run_command(python, ['-c', 'import torch; print(torch._C._GLIBCXX_USE_CXX11_ABI)'], check: true).stdout().strip()
+cxx11abi = cxx11abi.to_lower().contains('true') ? 1 : 0
+add_project_arguments('-D_GLIBCXX_USE_CXX11_ABI=@0@'.format(cxx11abi), language: 'cpp')

(torch._C._GLIBCXX_USE_CXX11_ABI prints True or False, so without the conversion the macro was being defined to True rather than to 0 or 1.)


I did a clean install again and tried with the following change to meson.build.

cxx11abi = cxx11abi.to_lower().contains('true') ? 1 : 0

It still produces the same unresolved symbol error. I hard-coded cxx11abi to 0. It resulted in the same error.


I am using python-3.13, g++-13 on ubuntu-24.04 (through WSL2). Torch version is 2.7.1.


@daknuett,

I added 'cuda' to the project like so:

project('qcd_ml_accel', 'cpp', 'cuda',
  version: '0.0.1',
  default_options: ['cpp_std=c++17'])

Then, I added the CUDA source to the list of sources passed to python.extension_module, like so:

    , ['extension_cpp/csrc/muladd.cpp', 'extension_cpp/csrc/cuda/muladd.cu']

Now all 8 tests pass.
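
Putting the two edits together, the relevant parts of meson.build would look roughly like this; the torch_dep object and target layout are assumed for illustration, not copied from the actual PR:

```meson
project('qcd_ml_accel', 'cpp', 'cuda',
  version: '0.0.1',
  default_options: ['cpp_std=c++17'])

# ... python installation, torch_dep, ABI defines, etc. as before ...

python.extension_module('_C',
  ['extension_cpp/csrc/muladd.cpp', 'extension_cpp/csrc/cuda/muladd.cu'],
  dependencies: [torch_dep],   # assumed torch include/link flags
  install: true,
  subdir: 'extension_cpp',
)
```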

@daknuett

Do you have a repository where I can look at this? Of course, CUDA support is very much appreciated.

@bumi001

I do, and I submitted a PR to you.

https://github.com/bumi001/extension-cpp-meson-cuda

Category
Q&A
Labels
None yet
4 participants
@bumi001 @rgommers @eli-schwartz @daknuett
