add Jetson Orin support #467


Open
thomas-hiddenpeak wants to merge 1 commit into huggingface:main from thomas-hiddenpeak:main

Conversation

@thomas-hiddenpeak commented Jan 4, 2025 (edited)

Motivation and Context

NVIDIA Jetson Orin devices have a compute capability of 8.7, which is not currently supported in the compute_cap_matching function. This PR ensures that these devices can be used with the library by adding the necessary support.

What does this PR do?

This PR adds support for NVIDIA Jetson Orin devices by including compute capability 8.7 in the compute_cap_matching function and updating the tests to ensure the new capability is correctly supported.

Fixes #466

Checklist

  • I have read the contributor guidelines.
  • I have added tests to verify my changes.
  • I have tagged the appropriate reviewers.

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@OlivierDehaene OR @Narsil

add Jetson Orin support
@r0kk commented

@HiddenPeak I am wondering if you could share reproducible steps for how you were able to run text-embeddings-inference on Jetson AGX Orin. It would be greatly appreciated 🙏.

Unfortunately I don't have deep enough knowledge to review your PR.

@thomas-hiddenpeak (Author) commented Jan 20, 2025 (edited)

@r0kk
The Jetson Orin series uses the SM87 CUDA architecture, which is part of the Ampere family. In theory it should be compatible with TEI, but in practice there are many incompatibilities, so it does not work out of the box. While attempting to use it, I ran into the following issues:

  1. The compute_cap_matching() function does not support the SM87 architecture, so I modified the source code and recompiled it.
  2. The GPU driver, CUDA runtime, and CUDA compiler must be correctly installed and available on the environment paths (on JetPack 6.1 with CUDA 12.6).
  3. The compilation process is extremely long, and memory usage exceeds 90% (60 GB).

I attempted to compile and deploy TEI on a Jetson AGX Orin 64G and found that it could not recognize SM87. Therefore, I modified the compute_cap_matching() function in backends/candle/src/compute_cap.rs to add support for the SM87 environment and architecture. Such modifications may not be effective in many cases, but fortunately, after making these changes, I was able to achieve support on the Jetson AGX Orin 64G. Not only did it not produce any errors, but it also showed excellent performance.
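To illustrate the kind of change described above, here is a hypothetical sketch of a compute-capability matching rule. This is not the actual TEI source in backends/candle/src/compute_cap.rs; the pairs listed are assumptions based on the general rule that, within the Ampere family, kernels compiled for SM80 can usually run on SM86/SM87 devices, which is why 8.7 (Jetson Orin) can be accepted:

```rust
// Hypothetical sketch (not the actual TEI implementation) of matching a
// device's runtime compute capability against the capability the CUDA
// kernels were compiled for.
fn compute_cap_matching(runtime_compute_cap: usize, compile_compute_cap: usize) -> bool {
    match (runtime_compute_cap, compile_compute_cap) {
        (75, 75) => true,
        // Ampere: SM80 binaries also run on SM86 and SM87 (Jetson Orin) devices
        (80 | 86 | 87, 80) => true,
        (86 | 87, 86) => true,
        (87, 87) => true,
        (90, 90) => true,
        _ => false,
    }
}

fn main() {
    // Jetson Orin reports compute capability 8.7 (SM87)
    assert!(compute_cap_matching(87, 80));
    assert!(compute_cap_matching(87, 87));
    assert!(!compute_cap_matching(75, 87));
    println!("SM87 accepted");
}
```

The PR itself simply adds the 8.7 arms to the existing match, so unsupported pairs still fail fast at startup instead of producing CUDA errors later.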

curl 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query": "What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json'

logs

2025-01-04T20:38:18.706787Z  INFO text_embeddings_backend_candle: backends/candle/src/lib.rs:292: Starting FlashBert model on Cuda(CudaDevice(DeviceId(1)))
2025-01-04T20:38:31.539445Z  INFO text_embeddings_router: router/src/lib.rs:248: Warming up model
2025-01-04T20:38:32.189069Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1812: Starting HTTP server: 0.0.0.0:8080
2025-01-04T20:38:32.189098Z  INFO text_embeddings_router::http::server: router/src/http/server.rs:1813: Ready
2025-01-04T20:44:11.047170Z  INFO rerank{total_time="177.15121ms" tokenization_time="727.783µs" queue_time="79.024583ms" inference_time="87.618256ms"}: text_embeddings_router::http::server: router/src/http/server.rs:459: Success
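The same /rerank call can be sketched from Python instead of curl. The endpoint path and JSON body shape are taken from the curl example above; the host and port (127.0.0.1:8080) are the defaults used in this thread, so adjust them to your deployment:

```python
# Sketch of calling TEI's /rerank route with only the standard library.
import json
from urllib import request

def build_rerank_payload(query: str, texts: list[str]) -> bytes:
    """Build the JSON body the /rerank route expects."""
    return json.dumps({"query": query, "texts": texts}).encode("utf-8")

def rerank(query: str, texts: list[str], url: str = "http://127.0.0.1:8080/rerank"):
    """POST the payload and return the decoded JSON response."""
    req = request.Request(
        url,
        data=build_rerank_payload(query, texts),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    body = build_rerank_payload(
        "What is Deep Learning?",
        ["Deep Learning is not...", "Deep learning is..."],
    )
    print(body.decode("utf-8"))
```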

More screenshots are as follows:

[three screenshots of the running service, not reproduced]

Therefore, I created a branch and added test code. After testing it in my application, I submitted a merge request.
Additionally, I also tried other embedding and rerank models, which ran well.

@r0kk commented Feb 6, 2025

@HiddenPeak
I can confirm that this is working on Jetson AGX 64GB. Thank you very much 🙏.


@thomas-hiddenpeak (Author) replied:

> @HiddenPeak I can confirm that this is working on Jetson AGX 64GB. Thank you very much 🙏.

It's very cool~

@taresh18-ag commented Jul 8, 2025 (edited)

Hi, great work.

How did you get it running on Jetson Orin? When I try to compile it, it throws this error:

[error screenshot, not reproduced]

these are the steps I followed:

curl https://sh.rustup.rs -sSf | sh
sudo apt-get install libssl-dev gcc -y
git clone https://github.com/huggingface/text-embeddings-inference.git
cd text-embeddings-inference
cargo install --path router -F candle-cuda -F http --no-default-features # getting error here

Also, if CUDA inference is not possible, I would like to test using CPU only. What are the steps to run this library on an ARM CPU? I looked into the Dockerfiles, but they all depend on Intel MKL libraries.

@thomas-hiddenpeak (Author) replied:
add -F dynamic-linking
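Applying that suggestion to the install command quoted earlier gives, as a sketch (feature names as used in this thread; whether your checkout exposes a dynamic-linking feature depends on the branch you build):

```shell
# Same cargo install command as above, with the suggested feature added.
# Run from the text-embeddings-inference checkout.
cargo install --path router -F candle-cuda -F http -F dynamic-linking --no-default-features
```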

@r0kk commented Jul 9, 2025

Following process worked for me:

Add NVCC to the PATH

NVIDIA's NVCC (NVIDIA CUDA Compiler) is a compiler driver used to compile CUDA (Compute Unified Device Architecture) code, which allows developers to write programs that run on NVIDIA GPUs. It translates CUDA code into executable binaries for GPU acceleration.

  1. Check if nvcc exists

     ls /usr/local/cuda/bin/nvcc

  2. Update environment variables

    • Open .bashrc

      nano ~/.bashrc

    • Add nvcc paths

      export PATH=/usr/local/cuda/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

    • Restart terminal

      source ~/.bashrc

    • Check nvcc version

      nvcc --version

Build Process (you can skip this if a build already exists)

We prepared a build, which can be found in the current repository. If it doesn't exist, you can follow the instructions below:

  1. Install Rust

     curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
     source $HOME/.cargo/env

  2. Clone the project
     This is a branch of the original project, because at the time of writing no official release for the Jetson family existed.

     git clone https://github.com/HiddenPeak/text-embeddings-inference.git
     cd text-embeddings-inference

  3. Install libssl (sometimes an openssl problem appears when building)

     sudo apt install libssl-dev

  4. Build

  • move into the router dir inside the project
  • to use less space on Jetson, set --target-dir to an external disc

     cd router
     cargo build --release --features=candle-cuda --target-dir <target dir for generated artifact>

Successfully merging this pull request may close these issues.

Could not start backend on Jetson AGX Orin
