Metal programming in Julia

With Metal.jl it's possible to program GPUs on macOS using the Metal programming framework.

The package is a work-in-progress. There are bugs, functionality is missing, and performance hasn't been optimized. Expect to have to make changes to this package if you want to use it. PRs are very welcome!

Requirements

  • Mac device with M-series chip
  • Julia 1.10-1.11
  • macOS 13-15

These requirements are fairly strict, and are due to our limited development resources (manpower, hardware). Technically, they can be relaxed. If you are interested in contributing to this, see this issue for more details. In practice, Metal.jl will probably work on any macOS 10.15+, and other GPUs that are supported by Metal might also function (if only partially), but such combinations are unsupported for now.

Quick start

Metal.jl can be installed with the Julia package manager. From the Julia REPL, type ] to enter the Pkg REPL mode and run:

pkg> add Metal

Or, equivalently, via the Pkg API:

julia> import Pkg; Pkg.add("Metal")

For an overview of the toolchain in use, you can run the following command after importing the package:

julia> using Metal

julia> Metal.versioninfo()
macOS 15.3.0, Darwin 24.3.0

Toolchain:
- Julia: 1.11.3
- LLVM: 16.0.6

Julia packages:
- Metal.jl: 1.5.1
- GPUArrays: 11.2.1
- GPUCompiler: 1.1.0
- KernelAbstractions: 0.9.33
- ObjectiveC: 3.3.0
- LLVM: 9.2.0
- LLVMDowngrader_jll: 0.6.0+0

1 device:
- Apple M2 Max (64.000 KiB allocated)

Array abstraction

The easiest way to work with Metal.jl is by using its array abstraction. The MtlArray type is meant both to be a convenient container for device memory and to provide a data-parallel abstraction for using the GPU without writing your own kernels:

julia> a = MtlArray([1])
1-element MtlArray{Int64, 1}:
 1

julia> a .+ 1
1-element MtlArray{Int64, 1}:
 2
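Because MtlArray plugs into the generic GPUArrays machinery, broadcasting, reductions, and host transfers also work without writing kernels. A minimal sketch (the data and sizes are illustrative assumptions, not from the README; Float32 is used since Metal GPUs do not support Float64):

using Metal

# Upload illustrative host data to the GPU.
x = MtlArray(rand(Float32, 1024))
y = MtlArray(rand(Float32, 1024))

z = x .* 2f0 .+ y      # broadcasting runs as a fused GPU kernel
s = sum(z)             # reductions execute on the GPU as well

z_host = Array(z)      # copy the result back to CPU memory when needed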

Kernel programming

The above array abstractions are all implemented using Metal kernels written in Julia. These kernels follow a similar programming style to Julia's other GPU back-ends, and with that deviate from how kernels are implemented in Metal C (i.e., indexing intrinsics are functions, not arguments, arbitrary aggregate arguments are supported, etc.):

julia> function vadd(a, b, c)
           i = thread_position_in_grid_1d()
           c[i] = a[i] + b[i]
           return
       end
vadd (generic function with 1 method)

julia> a = MtlArray([1,1,1,1]); b = MtlArray([2,2,2,2]); c = similar(a);

julia> @metal threads=2 groups=2 vadd(a, b, c)

julia> Array(c)
4-element Vector{Int64}:
 3
 3
 3
 3
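The same pattern extends to arrays whose length is not a multiple of the launch configuration. The sketch below is not part of the README: the vadd_checked kernel and the 256-thread group size are illustrative assumptions; it simply adds a bounds check and computes the group count with cld:

using Metal

function vadd_checked(a, b, c)
    i = thread_position_in_grid_1d()
    if i <= length(c)                  # guard threads past the end of the array
        @inbounds c[i] = a[i] + b[i]
    end
    return
end

n = 1000
a = MtlArray(rand(Float32, n)); b = MtlArray(rand(Float32, n)); c = similar(a)

threads = 256                          # threads per threadgroup
groups  = cld(n, threads)              # enough groups to cover all n elements
@metal threads=threads groups=groups vadd_checked(a, b, c)

@assert Array(c) ≈ Array(a) .+ Array(b)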

Metal API wrapper

Finally, all of the above functionality is made possible by interfacing with the Metal libraries through ObjectiveC.jl. These low-level API wrappers, along with some slightly higher-level Julia wrappers, are available in the MTL submodule exported by Metal.jl:

julia> dev = Metal.MTL.devices()[1]
<AGXG13XDevice: 0x14c17f200>
    name = Apple M1 Pro

julia> dev.name
NSString("Apple M1 Pro")
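As a small illustrative sketch built only on the calls shown above (nothing beyond devices() and the name property is assumed), one can enumerate every Metal device the system exposes:

using Metal

# Print the name of each Metal device visible to the system.
for (i, dev) in enumerate(Metal.MTL.devices())
    println(i, ": ", dev.name)
end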

Acknowledgements

This package builds upon the experience of several Julia contributors to CUDA.jl, AMDGPU.jl and oneAPI.jl.

