Elemental.jl

Julia interface to the Elemental linear algebra library.

A package for dense and sparse distributed linear algebra and optimization. The underlying functionality is provided by the C++ library Elemental, written originally by Jack Poulson and now maintained by LLNL.

Installation

The package is installed with `Pkg.add("Elemental")`. For Julia versions 1.3 and later, Elemental uses the binaries provided by BinaryBuilder, which are linked against the MPI library (mpich) that is also provided through BinaryBuilder.
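
For example, from the Julia REPL (a minimal sketch; `] add Elemental` in the package-manager prompt is equivalent):

```julia
julia> using Pkg

julia> Pkg.add("Elemental")

julia> using Elemental
```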

Examples

Each of these examples should be run in a separate Julia session.

SVD example

This example runs on a single processor and initializes MPI under the hood. Unlike the examples below, it does not require explicit use of MPI.jl.

```julia
julia> using LinearAlgebra, Elemental

julia> A = Elemental.Matrix(Float64)
0x0 Elemental.Matrix{Float64}

julia> Elemental.gaussian!(A, 100, 80);

julia> U, s, V = svd(A);

julia> convert(Matrix{Float64}, s)[1:10]
10-element Array{Float64,1}:
 19.8989
 18.2702
 17.3665
 17.0475
 16.4513
 16.3197
 16.0989
 15.8353
 15.5947
 15.5079
```
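
As a quick sanity check, the singular values can be compared against LinearAlgebra's dense `svdvals` on a local copy of `A` (a sketch; it assumes `convert(Matrix{Float64}, A)` works for the matrix `A` just as it does for `s` above):

```julia
# Copy the Elemental matrix into an ordinary Julia array and compute
# its singular values with the stock LAPACK-based routine.
A_local = convert(Matrix{Float64}, A)
s_local = svdvals(A_local)

# The leading singular values should agree with the Elemental result.
s_local[1:10] ≈ convert(Matrix{Float64}, s)[1:10]
```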

SVD example using MPI to parallelize on 4 processors

In this example, `@mpi_do` has to be used to send the parallel instructions to all processors.

```julia
julia> using MPI, MPIClusterManagers, Distributed

julia> man = MPIManager(np = 4);

julia> addprocs(man);

julia> @everywhere using LinearAlgebra, Elemental

julia> @mpi_do man A = Elemental.DistMatrix(Float64);

julia> @mpi_do man Elemental.gaussian!(A, 1000, 800);

julia> @mpi_do man U, s, V = svd(A);

julia> @mpi_do man println(s[1])
    From worker 5:  59.639990420817696
    From worker 4:  59.639990420817696
    From worker 2:  59.639990420817696
    From worker 3:  59.639990420817696
```

SVD example with DistributedArrays on 4 processors

This example differs slightly from the ones above in that it only calculates the singular values. However, it uses the DistributedArrays.jl package and has a single thread of control. Note that we do not need to use `@mpi_do` explicitly in this case.

```julia
julia> using MPI, MPIClusterManagers, Distributed

julia> man = MPIManager(np = 4);

julia> addprocs(man);

julia> using DistributedArrays, Elemental

julia> A = drandn(1000, 800);

julia> Elemental.svdvals(A)[1:5]
5-element SubArray{Float64,1,DistributedArrays.DArray{Float64,2,Array{Float64,2}},Tuple{UnitRange{Int64}},0}:
 59.4649
 59.1984
 59.0309
 58.7178
 58.389
```

Truncated SVD

The iterative SVD algorithm is implemented in pure Julia, but the factorized matrix as well as the Lanczos vectors are stored as distributed matrices in Elemental. Note that TSVD.jl doesn't depend on Elemental and only uses Elemental.jl through generic function calls.

```julia
julia> using MPI, MPIClusterManagers, Distributed

julia> man = MPIManager(np = 4);

julia> addprocs(man);

julia> @mpi_do man using Elemental, TSVD, Random

julia> @mpi_do man A = Elemental.DistMatrix(Float64);

julia> @mpi_do man Elemental.gaussian!(A, 5000, 2000);

julia> @mpi_do man Random.seed!(123) # to avoid different initial vectors on the workers

julia> @mpi_do man r = tsvd(A, 5);

julia> @mpi_do man println(r[2][1:5])
    From worker 3:  [1069.6059089732858, 115.44260091060129, 115.08319164529792, 114.87007788947226, 114.48092348847719]
    From worker 5:  [1069.6059089732858, 115.44260091060129, 115.08319164529792, 114.87007788947226, 114.48092348847719]
    From worker 2:  [1069.6059089732858, 115.44260091060129, 115.08319164529792, 114.87007788947226, 114.48092348847719]
    From worker 4:  [1069.6059089732858, 115.44260091060129, 115.08319164529792, 114.87007788947226, 114.48092348847719]
```

Linear Regression

```julia
# Set up a 2x2 system A and a 2-element right-hand side B as distributed matrices
@mpi_do man A = Elemental.DistMatrix(Float32)
@mpi_do man B = Elemental.DistMatrix(Float32)
@mpi_do man copyto!(A, Float32[2 1; 1 2])
@mpi_do man copyto!(B, Float32[4, 5])
```

Run distributed ridge regression ½|A*X-B|₂² + λ|X|₂²

```julia
@mpi_do man X = Elemental.ridge(A, B, 0f0)
```
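
With λ = 0 the ridge problem reduces to ordinary least squares, so for this small system the result can be checked locally with plain Julia (a sketch that uses only the standard `\` solver and is independent of Elemental):

```julia
# Solve the same 2x2 system locally; the exact solution is X = [1, 2].
A_local = Float32[2 1; 1 2]
B_local = Float32[4, 5]
X_local = A_local \ B_local
```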

Run distributed lasso regression ½|A*X-B|₂² + λ|X|₁ (only supported in recent versions of Elemental)

```julia
@mpi_do man X = Elemental.bpdn(A, B, 0.1f0)
```

Coverage

Right now, the best way to see whether a specific function is available is to look through the source code. We are looking for help to prepare Documenter.jl-based documentation for this package, and to add more functionality from the Elemental library.
