🧠⚙️ Standalone native implementation of the Web Neural Network API


CI status (per backend and OS):

Backend \ OS         | Windows                                            | Linux
null (for unit test) | null backend                                       |
DirectMLX            | DirectMLX backend; Node binding; memory leak check |
OpenVINO             | OpenVINO backend; Node binding                     | OpenVINO backend; Node binding
XNNPACK              | XNNPACK backend                                    | XNNPACK backend
oneDNN               | oneDNN backend                                     | oneDNN backend
MLAS                 | MLAS backend                                       |

A separate clang format check also runs in CI.

WebNN-native

WebNN-native is a native implementation of the Web Neural Network API.

It provides several building blocks:

  • WebNN C/C++ headers that applications and other building blocks use.
    • The webnn.h header, a one-to-one mapping of the WebNN IDL.
    • A C++ wrapper for webnn.h.
  • Backend implementations that use platforms' ML APIs:
    • DirectML on Windows 10
    • DirectMLX on Windows 10
    • OpenVINO on Windows 10 and Linux
    • oneDNN on Windows 10 and Linux
    • XNNPACK on Windows 10 and Linux
    • MLAS on Windows 10 and Linux
    • Other backends are to be added

WebNN-native uses the code of other open source projects:

  • The code generator and infrastructure code of the Dawn project.
  • The DirectMLX and device wrapper of the DirectML project.
  • The XNNPACK project.
  • The oneDNN project.
  • The MLAS project.

Build and Run

Install depot_tools

WebNN-native uses the Chromium build system and dependency management, so you need to install depot_tools and add it to the PATH.

Notes:

  • On Windows, you'll need to set the environment variable DEPOT_TOOLS_WIN_TOOLCHAIN=0. This tells depot_tools to use your locally installed version of Visual Studio (by default, depot_tools will try to download a Google-internal version).
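On a Unix-like system, the setup above might look like the following sketch (the clone URL is the standard Chromium location for depot_tools; the checkout path is illustrative):

```shell
# Fetch depot_tools, the Chromium build tooling.
git clone https://chromium.googlesource.com/chromium/tools/depot_tools.git

# Add it to PATH for the current shell session (add this to your shell
# profile to make it persistent).
export PATH="$PWD/depot_tools:$PATH"

# On Windows (cmd.exe), additionally:
#   set DEPOT_TOOLS_WIN_TOOLCHAIN=0
```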

Get the code

Get the source code as follows:

# Clone the repo as "webnn-native"
> git clone https://github.com/webmachinelearning/webnn-native.git webnn-native && cd webnn-native

# Bootstrap the gclient configuration
> cp scripts/standalone.gclient .gclient

# Fetch external dependencies and toolchains with gclient
> gclient sync

Setting up the build

Generate build files using gn args out/Debug or gn args out/Release.

A text editor will open asking for build options; the most common option is is_debug=true/false. Running gn args out/Release --list shows all the possible options.

To build with a backend, please set the corresponding option from the following table.

Backend   | Option
DirectML  | webnn_enable_dml=true
DirectMLX | webnn_enable_dmlx=true
OpenVINO  | webnn_enable_openvino=true
XNNPACK   | webnn_enable_xnnpack=true
oneDNN    | webnn_enable_onednn=true
MLAS      | webnn_enable_mlas=true
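As a non-interactive alternative to the editor-based gn args flow, the same options can be passed on the command line with gn gen; the backend flag below is just one example from the table:

```shell
# Generate Release build files with the DirectML backend enabled.
gn gen out/Release --args="is_debug=false webnn_enable_dml=true"
```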

Build

Then use ninja -C out/Release or ninja -C out/Debug to build WebNN-native.

Notes

  • To build with the XNNPACK backend, please build XNNPACK first, e.g. by running ./scripts/build-local.sh. For a Windows build, it requires supplying -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>" to select the MSVC static runtime library.
  • To build with the oneDNN backend, please build oneDNN first by following the build from source instructions.
  • To build with the MLAS backend, please build MLAS (part of ONNX Runtime) first by following the Build ONNX Runtime for inferencing instructions, e.g. by running .\build.bat --config Release --parallel --enable_msvc_static_runtime for a Windows build.
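For the Windows XNNPACK prerequisite, a standalone CMake configuration might look like the following sketch; the source and build directory paths are assumptions, and only the runtime-library flag comes from the note above:

```shell
# Configure XNNPACK with the MSVC static runtime library. The generator
# expression selects MultiThreadedDebug for Debug configs and
# MultiThreaded otherwise.
cmake -S third_party/XNNPACK -B build-xnnpack -DCMAKE_MSVC_RUNTIME_LIBRARY="MultiThreaded$<$<CONFIG:Debug>:Debug>"

# Build the Release configuration.
cmake --build build-xnnpack --config Release
```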

Run tests

Run unit tests:

> ./out/Release/webnn_unittests

Run end2end tests on a default device:

> ./out/Release/webnn_end2end_tests

You can also specify a device to run the end2end tests on using the "-d" option, for example:

> ./out/Release/webnn_end2end_tests -d gpu

Currently "cpu", "gpu" and "default" are supported; more devices will be supported in the future.

Notes:

Run examples

License

Apache 2.0 Public License, please see LICENSE.
