# AsynchronousIterativeAlgorithms.jl
| Build Status | Documentation |
|---|---|
🧮 AsynchronousIterativeAlgorithms.jl handles the distributed asynchronous communication, so you can focus on designing your algorithm.
💽 It also offers a convenient way to manage the distribution of your problem's data across multiple processes or remote machines.
You can install AsynchronousIterativeAlgorithms by typing

```julia
julia> ] add AsynchronousIterativeAlgorithms
```

Say you want to implement a distributed version of Stochastic Gradient Descent (SGD). You'll need to define:
- an algorithm structure subtyping `AbstractAlgorithm{Q,A}`
- the initialization step where you compute the first iteration
- the worker step performed by the workers when they receive a query `q::Q` from the central node
- the asynchronous central step performed by the central node when it receives an answer `a::A` from a worker
Let's first set up our distributed environment.
```julia
# Launch multiple processes (or remote machines)
using Distributed; addprocs(5)

# Instantiate and precompile environment in all processes
@everywhere (using Pkg; Pkg.activate(@__DIR__); Pkg.instantiate(); Pkg.precompile())

# You can now use AsynchronousIterativeAlgorithms
@everywhere (using AsynchronousIterativeAlgorithms; const AIA = AsynchronousIterativeAlgorithms)
```
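If you'd like to verify that the setup worked before going on, Distributed.jl's `nworkers` and `workers` can confirm that the worker processes are up (this check is optional and not part of the original walkthrough):

```julia
# Optional sanity check: the five worker processes should now be available
using Distributed
@assert nworkers() == 5
println("Worker ids: ", workers())
```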
Now to the implementation.
```julia
# define on all processes
@everywhere begin
    # algorithm
    mutable struct SGD <: AbstractAlgorithm{Vector{Float64}, Vector{Float64}}
        stepsize::Float64
        previous_q::Vector{Float64} # previous query
        SGD(stepsize::Float64) = new(stepsize, Float64[])
    end

    # initialisation step
    function (sgd::SGD)(problem::Any)
        sgd.previous_q = rand(problem.n)
    end

    # worker step
    function (sgd::SGD)(q::Vector{Float64}, problem::Any)
        sgd.stepsize * problem.∇f(q, rand(1:problem.m))
    end

    # asynchronous central step
    function (sgd::SGD)(a::Vector{Float64}, worker::Int64, problem::Any)
        sgd.previous_q -= a
    end
end
```
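Putting the pieces together: in this example a worker computes a stochastic gradient at the query it received, and the central node subtracts it from its current iterate, i.e.

$$q \leftarrow q - \gamma \, \nabla f_i(q_{\text{worker}}),$$

where $\gamma$ is the stepsize, $i$ is a data point drawn uniformly at random by the worker, and $q_{\text{worker}}$ is the query that worker last received, which may lag behind the central iterate since the steps are asynchronous.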
Now let's test our algorithm on a linear regression problem with mean squared error loss (LRMSE). This problem must be compatible with your algorithm. In this example, that means providing the attributes `n` and `m` (dimension of the regressor and number of data points), and the method `∇f(x::Vector{Float64}, i::Int64)` (gradient of the linear regression loss at the i-th data point).
```julia
@everywhere begin
    struct LRMSE
        A::Union{Matrix{Float64}, Nothing}
        b::Union{Vector{Float64}, Nothing}
        n::Int64
        m::Int64
        L::Float64 # Lipschitz constant of f
        ∇f::Function
    end

    function LRMSE(A::Matrix{Float64}, b::Vector{Float64})
        m, n = size(A)
        L = maximum(A' * A)
        ∇f(x) = A' * (A * x - b) / n
        ∇f(x, i) = A[i,:] * (A[i,:]' * x - b[i])
        LRMSE(A, b, n, m, L, ∇f)
    end
end
```
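As a quick local sanity check (not part of the original walkthrough; the sizes below are illustrative), you can build a problem instance and evaluate a single-point gradient directly:

```julia
# Build a small LRMSE instance and evaluate one stochastic gradient
A, b = rand(42, 10), rand(42)
problem = LRMSE(A, b)
@assert (problem.n, problem.m) == (10, 42)   # regressor dimension and number of data points
g = problem.∇f(zeros(problem.n), 1)          # gradient of the loss at the first data point
```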
We're almost ready to start the algorithm...
```julia
# Provide the stopping criteria
stopat = (iteration=1000, time=42.)

# Instantiate your algorithm
sgd = SGD(0.01)

# Create a function that returns an instance of your problem for a given pid
problem_constructor = (pid) -> LRMSE(rand(42, 10), rand(42))

# And you can start!
history = start(sgd, problem_constructor, stopat);
```
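If you'd like a baseline to compare against, here is a hypothetical single-process loop that applies the same stochastic gradient update sequentially; it is only a sketch for sanity-checking the SGD logic and is not part of the package API:

```julia
# Sequential reference implementation of the same update rule (hypothetical helper)
function local_sgd(problem, stepsize, iterations)
    q = rand(problem.n)
    for _ in 1:iterations
        q -= stepsize * problem.∇f(q, rand(1:problem.m))
    end
    return q
end

q_ref = local_sgd(LRMSE(rand(42, 10), rand(42)), 0.01, 1000)
```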