
Distributed Arrays for Julia

NOTE: Distributed Arrays will only work on Julia v0.4.0 or later.

DArrays have been removed from the Julia Base library in v0.4, so it is now necessary to import the DistributedArrays package on all spawned processes.

@everywhere using DistributedArrays
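For example, a minimal setup (the same pattern appears in the session shown later in this README) is:

addprocs(4)                          # start four worker processes
@everywhere using DistributedArrays  # load the package on every process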

Distributed Arrays

Large computations are often organized around large arrays of data. In these cases, a particularly natural way to obtain parallelism is to distribute arrays among several processes. This combines the memory resources of multiple machines, allowing use of arrays too large to fit on one machine. Each process operates on the part of the array it owns, providing a ready answer to the question of how a program should be divided among machines.

Julia distributed arrays are implemented by the DArray type. A DArray has an element type and dimensions just like an Array. A DArray can also use arbitrary array-like types to represent the local chunks that store actual data. The data in a DArray is distributed by dividing the index space into some number of blocks in each dimension.

Common kinds of arrays can be constructed with functions beginning with d:

dzeros(100,100,10)
dones(100,100,10)
drand(100,100,10)
drandn(100,100,10)
dfill(x,100,100,10)

In the last case, each element will be initialized to the specified value x. These functions automatically pick a distribution for you. For more control, you can specify which processes to use, and how the data should be distributed:

dzeros((100,100),workers()[1:4], [1,4])

The second argument specifies that the array should be created on the first four workers. When dividing data among a large number of processes, one often sees diminishing returns in performance. Placing DArrays on a subset of processes allows multiple DArray computations to happen at once, with a higher ratio of work to communication on each process.

The third argument specifies a distribution; the nth element of this array specifies how many pieces dimension n should be divided into. In this example the first dimension will not be divided, and the second dimension will be divided into 4 pieces. Therefore each local chunk will be of size (100,25). Note that the product of the distribution array must equal the number of processes.
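As a quick sanity check of the example above (a sketch, assuming at least four worker processes are available):

d = dzeros((100,100), workers()[1:4], [1,4])

length(procs(d))   # 4 processes hold the data
size(d)            # (100,100) overall
# 100 ÷ 1 = 100 rows and 100 ÷ 4 = 25 columns per chunk, i.e. each localpart is (100,25)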

  • distribute(a::Array) converts a local array to a distributed array.

  • localpart(a::DArray) obtains the locally-stored portion of a DArray.

  • localindexes(a::DArray) gives a tuple of the index ranges owned by the local process.

  • convert(Array, a::DArray) brings all the data to the local process.

Indexing a DArray (square brackets) with ranges of indexes always creates a SubArray, not copying any data.
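For example, a short sketch exercising these functions (assuming worker processes have already been added and DistributedArrays loaded everywhere):

a = rand(8, 8)
d = distribute(a)        # spread the local array across the workers

procs(d)                 # processes holding a piece of d
v = d[1:4, 1:4]          # range indexing yields a SubArray view, no copy
b = convert(Array, d)    # gather all the data back to the local process
b == a                   # true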

Constructing Distributed Arrays

The primitive DArray constructor has the following somewhat elaborate signature:

DArray(init, dims[, procs, dist])

init is a function that accepts a tuple of index ranges. This function should allocate a local chunk of the distributed array and initialize it for the specified indices. dims is the overall size of the distributed array. procs optionally specifies a vector of process IDs to use. dist is an integer vector specifying how many chunks the distributed array should be divided into in each dimension.

The last two arguments are optional, and defaults will be used if they are omitted.

As an example, here is how to turn the local array constructor fill into a distributed array constructor:

dfill(v, args...) = DArray(I -> fill(v, map(length, I)), args...)

In this case the init function only needs to call fill with the dimensions of the local piece it is creating.
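As another sketch (the helper name dprocid is made up here, not part of the package): since the init function builds the local chunk on the worker that owns it, it can, for instance, record that worker's id in every element:

# Each chunk is filled with the id of the worker that constructed it.
dprocid(dims) = DArray(I -> fill(myid(), map(length, I)), dims)

d = dprocid((8, 8))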

DArrays can also be constructed from multidimensional Array comprehensions with the @DArray macro syntax. This syntax is just sugar for the primitive DArray constructor:

julia> [i+j for i=1:5, j=1:5]
5x5 Array{Int64,2}:
 2  3  4  5   6
 3  4  5  6   7
 4  5  6  7   8
 5  6  7  8   9
 6  7  8  9  10

julia> @DArray [i+j for i=1:5, j=1:5]
5x5 DistributedArrays.DArray{Int64,2,Array{Int64,2}}:
 2  3  4  5   6
 3  4  5  6   7
 4  5  6  7   8
 5  6  7  8   9
 6  7  8  9  10

Distributed Array Operations

At this time, distributed arrays do not have much functionality. Their major utility is allowing communication to be done via array indexing, which is convenient for many problems. As an example, consider implementing the "life" cellular automaton, where each cell in a grid is updated according to its neighboring cells. To compute a chunk of the result of one iteration, each process needs the immediate neighbor cells of its local chunk. The following code accomplishes this:

function life_step(d::DArray)
    DArray(size(d), procs(d)) do I
        top   = mod(first(I[1])-2, size(d,1)) + 1
        bot   = mod(last(I[1])   , size(d,1)) + 1
        left  = mod(first(I[2])-2, size(d,2)) + 1
        right = mod(last(I[2])   , size(d,2)) + 1
        old = Array(Bool, length(I[1])+2, length(I[2])+2)
        old[1      , 1      ] = d[top , left]   # left side
        old[2:end-1, 1      ] = d[I[1], left]
        old[end    , 1      ] = d[bot , left]
        old[1      , 2:end-1] = d[top , I[2]]
        old[2:end-1, 2:end-1] = d[I[1], I[2]]   # middle
        old[end    , 2:end-1] = d[bot , I[2]]
        old[1      , end    ] = d[top , right]  # right side
        old[2:end-1, end    ] = d[I[1], right]
        old[end    , end    ] = d[bot , right]
        life_rule(old)
    end
end

As you can see, we use a series of indexing expressions to fetch data into a local array old. Note that the do block syntax is convenient for passing init functions to the DArray constructor. Next, the serial function life_rule is called to apply the update rules to the data, yielding the needed DArray chunk. Nothing about life_rule is DArray-specific, but we list it here for completeness:

function life_rule(old)
    m, n = size(old)
    new = similar(old, m-2, n-2)
    for j = 2:n-1
        for i = 2:m-1
            nc = +(old[i-1,j-1], old[i-1,j], old[i-1,j+1],
                   old[i  ,j-1],             old[i  ,j+1],
                   old[i+1,j-1], old[i+1,j], old[i+1,j+1])
            new[i-1,j-1] = (nc == 3 || nc == 2 && old[i,j])
        end
    end
    new
end
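A brief usage sketch of the two functions above (the grid size is arbitrary; the initial grid is built with the primitive constructor from the previous section):

# Random initial Bool grid, distributed with the default layout.
d = DArray(I -> rand(Bool, map(length, I)...), (20, 20))

d1 = life_step(d)    # one generation
d2 = life_step(d1)   # two generations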

Numerical Results of Distributed Computations

Floating point arithmetic is not associative, and this comes up when performing distributed computations over DArrays. All DArray operations are performed over the localpart chunks and then aggregated. The change in ordering of the operations will change the numeric result, as seen in this simple example:

julia> addprocs(8);

julia> @everywhere using DistributedArrays

julia> A = fill(1.1, (100,100));

julia> sum(A)
11000.000000000013

julia> DA = distribute(A);

julia> sum(DA)
11000.000000000127

julia> sum(A) == sum(DA)
false

The final ordering of the operations depends on how the array is distributed across the processes.
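The same sensitivity to ordering shows up in plain Float64 arithmetic, independent of DArrays; grouping the same three terms differently changes the result:

julia> (0.1 + 0.2) + 0.3
0.6000000000000001

julia> 0.1 + (0.2 + 0.3)
0.6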

Garbage Collection and DArrays

When a DArray is constructed (typically on the master process), the returned DArray object stores information on how the array is distributed, which processor holds which indices, and so on. When the DArray object on the master process is garbage collected, all participating workers are notified and the localparts of the DArray are freed on each worker.

Since the DArray object itself is small, a problem arises: garbage collection on the master faces no memory pressure to collect the DArray immediately. This results in a delay before the memory is released on the participating workers.

Therefore it is highly recommended to explicitly call close(d::DArray) as soon as user code has finished working with the distributed array.

It is also important to note that the localparts of a DArray are freed on all participating workers when the DArray object on the creating process is collected. It is therefore important to maintain a reference to a DArray object on the creating process for as long as it is being computed upon.

darray_closeall() is another useful function for managing distributed memory. It releases all DArrays created from the calling process, including any temporaries created during computation.
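A minimal sketch of the recommended pattern (the array size and the use of sum are arbitrary):

d = drand(1000, 1000)
result = sum(d)

close(d)            # release the localparts on the workers right away

# ...at the end of a computation or session:
darray_closeall()   # release every DArray created from this process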
