-
I am simply trying to get the code from https://github.com/daknuett/extension-cpp-meson to build, install into site-packages, and run. It builds and installs, but the loader cannot resolve symbols when I try to import extension_cpp. Running ldd shows missing dependencies, even though both libc10.so and libtorch_cpu.so are available in .venv/lib/python3.13/site-packages/torch/lib. How do I change meson.build so that _C.cpython-313-x86_64-linux-gnu.so contains the correct search path for libc10.so and libtorch_cpu.so? PS: The unresolved symbol is the same one as in this discussion:
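One way to attack what the question asks for, sketched under assumptions: locate torch's lib directory at configure time and bake it into the extension's rpath via Meson's `install_rpath`. Here `python` is assumed to be a `find_installation()` object and `torch_dep`/`sources` are assumed to be defined elsewhere in the meson.build; this hardcodes the build machine's path, with the caveats discussed in the replies below.

```meson
# Hedged sketch, not an official recipe: ask the installed torch where
# its shared libraries live, then record that directory as the rpath of
# the installed extension so the dynamic loader can find libc10.so and
# libtorch_cpu.so at import time.
torch_libdir = run_command(python,
  ['-c', 'import os, torch; print(os.path.join(os.path.dirname(torch.__file__), "lib"))'],
  check: true).stdout().strip()

extension = python.extension_module('_C',
  sources,
  dependencies: [torch_dep],
  install: true,
  install_rpath: torch_libdir,
)
```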
Turns out I didn't need to use cpp_args or add_project_arguments(). But I needed to add the following to the meson.build that @daknuett has:

```meson
min_supported_cpython = run_command(python,
  ['-c', 'import torch.utils.cpp_extension; print(torch.utils.cpp_extension.min_supported_cpython)'],
  check: true).stdout().strip()
add_global_arguments('-D_Py_LIMITED_API=@0@'.format(min_supported_cpython), language: 'cpp')
```

Now nm produces what I have been seeking:

```
nm -u --demangle _C.cpython-313-x86_64-linux-gnu.so | grep DispatchKey
U c10::impl::ExcludeDispatchKeyGuard::ExcludeDispatchKeyGuard(c10::DispatchKeySet)
U c10::impl::ExcludeDispatchKeyGuard::~ExcludeDispatchKey…
```

Replies: 5 comments 18 replies
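For completeness, a runtime-side sketch of the other workaround mentioned in this thread (a wrapper module that sets things up before the extension is imported). This is not an official PyTorch API; the function name is made up, and it simply preloads a package's bundled shared libraries with RTLD_GLOBAL so a subsequent extension import can resolve its undefined symbols without a baked-in rpath.

```python
import ctypes
import importlib.util
import os

def preload_shared_libs(package, names):
    """Sketch of the wrapper-module trick: load `package`'s bundled
    shared libraries into the process with RTLD_GLOBAL so a compiled
    extension imported afterwards can resolve its undefined symbols.
    Returns the list of loaded library handles."""
    spec = importlib.util.find_spec(package)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(package)
    # Convention used by torch wheels: shared libs live in <pkg>/lib.
    libdir = os.path.join(os.path.dirname(spec.origin), "lib")
    return [ctypes.CDLL(os.path.join(libdir, n), mode=ctypes.RTLD_GLOBAL)
            for n in names]

# Hypothetical usage, run before `import extension_cpp._C`:
# preload_shared_libs("torch", ["libc10.so", "libtorch_cpu.so"])
```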
-
Broadly speaking, this is impossible for library technology to do in the first place. libtorch_cpu.so etc. can be in any directory, because pytorch itself could be installed into any site-packages directory. You need indirection for this, in the form of e.g. capsule imports, but that assumes that the project in question (pytorch, here) supports such usage. The other approach is that your extension is "broken" by default but works when imported by a python module that first dynamically computes the search path and then sets things up. I don't know what pytorch users usually do (I don't use pytorch myself). It's possible that some solutions might be "good enough", e.g. hardcoding the absolute path on the build machine, which means all users must compile their own copy of the package. You could also assume that people don't install to multiple sys.path directories (they actually do, but maybe the users of any given package don't) and simply hardcode a relative rpath. Does pytorch not provide any guidance for this?
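The second, relocatable flavour of that hardcoding can be made concrete with a $ORIGIN-relative rpath. A small sketch, assuming both packages live under the same site-packages tree (function name and paths are illustrative):

```python
import os

def origin_rpath(ext_dir, lib_dir):
    """Express lib_dir relative to the extension's own install
    directory as a $ORIGIN-based rpath string. This encodes the
    assumption that both packages share one site-packages directory."""
    rel = os.path.relpath(lib_dir, ext_dir)
    return os.path.join("$ORIGIN", rel)

print(origin_rpath("site-packages/extension_cpp", "site-packages/torch/lib"))
# -> $ORIGIN/../torch/lib
```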
-
PyTorch officially only supports setuptools, though they are trying to move away from it. You may be interested in their analysis of meson and scikit-build-core at pytorch/pytorch#157807.
-
That is for building PyTorch itself, which is quite different from building an extension module against the PyTorch C++ API. For the latter, multiple build systems are supported, and any build system that sets up the right compile and link flags should work. The question is just how easy that is, which varies.
I think there's not much Meson usage because of the lack of solid CUDA support; CMake is pretty much the standard there. For CPU it shouldn't be too difficult. I like the approach taken in the linked repository. I'll try to find time to test and see what the problem is. Also cc @daknuett for visibility (I hope you don't mind the ping).
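To give a feel for "setting up the right compile and link flags" in Meson, here is a hedged configure-time sketch. It relies on `torch.utils.cpp_extension.include_paths()` and `library_paths()`, which torch does expose; everything else (variable names, the exact library list) is an assumption, and `python` is assumed to be a `find_installation()` object.

```meson
# Hedged sketch: query the installed torch for its C++ API locations
# and wrap them in a dependency object for the extension target.
torch_incdir = run_command(python,
  ['-c', 'import torch.utils.cpp_extension as c; print(c.include_paths()[0])'],
  check: true).stdout().strip()
torch_libdir = run_command(python,
  ['-c', 'import torch.utils.cpp_extension as c; print(c.library_paths()[0])'],
  check: true).stdout().strip()

torch_dep = declare_dependency(
  compile_args: ['-I' + torch_incdir],
  link_args: ['-L' + torch_libdir, '-lc10', '-ltorch_cpu', '-ltorch'],
)
```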
-
I agree with you that daknuett's approach is better. I hard-coded the path just to see if it would work without the --no-build-extension flag. Using the meson.build as-is from daknuett, with the --no-build-extension flag, also produces the same undefined-symbol error. I am still trying to figure out why this is happening. Also, what kind of support does meson currently provide for CUDA?
-
https://mesonbuild.com/Cuda-module.html
I'm not a CUDA user, but I know we have a bunch of happy users of our CUDA support. Torch (and specifically its C++ ABI) is not CUDA; it's possible to have integrated support for the latter and not the former, clearly.
-
CUDA support in Meson seems to be missing compute capabilities 8.9, 9.0, 10 and 12 in the docs; the implementation looks partial (it does have 8.9 and 9.0, but nothing beyond that). CUDA 13 was also just released and needs to be added here. I had wanted to poke at this PyTorch support a bit (since I do work on and with PyTorch), but unfortunately I am unlikely to get around to it soon.
-
Quick question: how did you set up your environment and build it?
-
I am wondering if one of you (@rgommers, @eli-schwartz) could point me to how to create a ninja rule like the following using meson:
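The specific rule asked about is not shown here, but a hand-written ninja rule generally maps onto a `custom_target` in Meson. A hypothetical sketch, where every name and the command itself are placeholders:

```meson
# Hypothetical sketch: run a generator script as a build step, the way
# a custom ninja rule + build statement would.
generated = custom_target('generate_source',
  input: 'input.txt',
  output: 'generated.cpp',
  command: [find_program('python3'), files('generate.py'), '@INPUT@', '@OUTPUT@'],
)
```

The `@INPUT@`/`@OUTPUT@` placeholders are substituted by Meson, and the resulting file can be added to the extension's source list like any other source.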
-
@daknuett, I am wondering what your setup is. After cloning extension-cpp-meson, I created a virtual environment inside the extension-cpp-meson directory and installed torch, numpy, meson-python, and ninja. After that I tried the following command as per your README, and I get the following error. How did you not encounter it?
-
It should suffice to add the post_cflags via meson's support for this.
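A sketch of what that could look like, under assumptions: the extra compile flags (the "post_cflags") are passed straight to the extension target via `cpp_args`. The flag values shown are illustrative only, and `py`, `sources`, and `torch_dep` are assumed to be defined elsewhere.

```meson
# Hedged sketch: per-target compile flags on the extension module.
extension = py.extension_module('_C',
  sources,
  cpp_args: ['-O2', '-fvisibility=hidden'],  # placeholder post_cflags
  dependencies: [torch_dep],
  install: true,
)
```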
-
Turns out I didn't need to use cpp_args or add_project_arguments(). But I needed to add the following to the meson.build that@daknuett has. Now, nm produces what I have been seeking: |
BetaWas this translation helpful?Give feedback.
All reactions
👍 1
-
I can confirm that I get the same error as you. I am working on it.
-
I cannot confirm that. Instead I found that the following resolves the ABI issues:
-
I did a clean install and tried again with the following change to meson.build; it still produces the same unresolved-symbol error. I also hard-coded cxx11abi to 0, which resulted in the same error.
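For reference, instead of hard-coding cxx11abi, the value can be queried from the installed torch at configure time. A hedged sketch, assuming `python` is a `find_installation()` object; `torch._C._GLIBCXX_USE_CXX11_ABI` is the flag torch itself was compiled with:

```meson
# Hedged sketch: match the extension's C++ ABI setting to torch's.
cxx11abi = run_command(python,
  ['-c', 'import torch; print(int(torch._C._GLIBCXX_USE_CXX11_ABI))'],
  check: true).stdout().strip()
add_project_arguments('-D_GLIBCXX_USE_CXX11_ABI=@0@'.format(cxx11abi), language: 'cpp')
```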
-
I am using Python 3.13 and g++ 13 on Ubuntu 24.04 (through WSL2). Torch version is 2.7.1.
-
I added 'cuda' to the project, and then I added the cuda source to the list of sources passed to python.extension_module. Now all 8 tests pass.
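The two changes described can be sketched as follows; the file names are placeholders and `torch_dep` is assumed to be declared elsewhere in the meson.build:

```meson
# Hedged sketch of the described change: enable the CUDA language and
# feed the .cu file to the extension target alongside the C++ sources.
project('extension_cpp', 'cpp', 'cuda')

py = import('python').find_installation(pure: false)

extension = py.extension_module('_C',
  ['extension_cpp/csrc/muladd.cpp',      # existing C++ source (placeholder name)
   'extension_cpp/csrc/cuda/muladd.cu'], # CUDA kernel added to the list
  dependencies: [torch_dep],
  install: true,
)
```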
-
Do you have a repository where I can look at this? Of course, CUDA support is very much appreciated.
-
I do, and I did submit a PR to you.