Base.Threads.@threads
— Macro
Threads.@threads [schedule] for ... end
A macro to execute a for loop in parallel. The iteration space is distributed to coarse-grained tasks. This policy can be specified by the schedule argument. The execution of the loop waits for the evaluation of all iterations.
See also: @spawn and pmap in Distributed.
Extended help
Semantics
Unless stronger guarantees are specified by the scheduling option, the loop executed by the @threads macro has the following semantics.
The @threads macro executes the loop body in an unspecified order and potentially concurrently. It does not specify the exact assignments of the tasks and the worker threads. The assignments can be different for each execution. The loop body code (including any code transitively called from it) must not make any assumptions about the distribution of iterations to tasks or the worker thread in which they are executed. The loop body for each iteration must be able to make forward progress independent of other iterations and be free from data races. As such, invalid synchronizations across iterations may deadlock, while unsynchronized memory accesses may result in undefined behavior.
For example, the above conditions imply that:
- Communicating between iterations using blocking primitives like Channels is incorrect.
- Unless the :static schedule is used, the value of threadid() may change even within a single iteration. See Task Migration.
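For instance, a minimal race-free pattern (an illustrative sketch, not part of the original docstring) has each iteration write only to its own slot of a preallocated array:

results = zeros(Int, 1000)
Threads.@threads for i in eachindex(results)
    results[i] = i^2   # each iteration touches a distinct location, so there is no data race
end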
Schedulers
Without the scheduler argument, the exact scheduling is unspecified and varies across Julia releases. Currently, :dynamic is used when the scheduler is not specified.
The schedule argument is available as of Julia 1.5.
:dynamic (default)
The :dynamic scheduler executes iterations dynamically on available worker threads. The current implementation assumes that the workload for each iteration is uniform; however, this assumption may be removed in the future.
This scheduling option is merely a hint to the underlying execution mechanism. However, a few properties can be expected. The number of Tasks used by the :dynamic scheduler is bounded by a small constant multiple of the number of available worker threads (Threads.threadpoolsize()). Each task processes contiguous regions of the iteration space. Thus, @threads :dynamic for x in xs; f(x); end is typically more efficient than @sync for x in xs; @spawn f(x); end if length(xs) is significantly larger than the number of worker threads and the run-time of f(x) is relatively smaller than the cost of spawning and synchronizing a task (typically less than 10 microseconds).
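Spelled out, the two patterns being compared look like this (an illustrative sketch; xs and f are placeholders, not from the docstring):

xs = 1:1_000
f(x) = x^2   # placeholder per-element work

# Coarse-grained: a bounded number of tasks, each handling a contiguous chunk of xs.
Threads.@threads :dynamic for x in xs
    f(x)
end

# Fine-grained: one task per element, with more spawn/synchronization overhead.
@sync for x in xs
    Threads.@spawn f(x)
end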
The :dynamic option for the schedule argument is available and the default as of Julia 1.8.
:greedy
The :greedy scheduler spawns up to Threads.threadpoolsize() tasks, each greedily working on the given iterated values as they are produced. As soon as one task finishes its work, it takes the next value from the iterator. Work done by any individual task is not necessarily on contiguous values from the iterator. The given iterator may produce values forever; only the iterator interface is required (no indexing).
This scheduling option is generally a good choice if the workload of individual iterations is not uniform or has a large spread.
The :greedy option for the schedule argument is available as of Julia 1.11.
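A hedged sketch of such a workload (the work function and the random sizes are placeholders, not from the docstring): the iterator is a plain generator, so no indexing is needed, and the per-item cost varies widely.

work(n) = sum(abs2, rand(n))   # placeholder: cost grows with n
Threads.@threads :greedy for n in (rand(1:10_000) for _ in 1:100)
    work(n)
end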
:static
The :static scheduler creates one task per thread and divides the iterations equally among them, assigning each task specifically to each thread. In particular, the value of threadid() is guaranteed to be constant within one iteration. Specifying :static is an error if used from inside another @threads loop or from a thread other than 1.
:static scheduling exists to support the transition of code written before Julia 1.3. In newly written library functions, :static scheduling is discouraged because functions using this option cannot be called from arbitrary worker threads.
Examples
To illustrate the different scheduling strategies, consider the following function busywait containing a non-yielding timed loop that runs for a given number of seconds.
julia> function busywait(seconds)
           tstart = time_ns()
           while (time_ns() - tstart) / 1e9 < seconds
           end
       end

julia> @time begin
           Threads.@spawn busywait(5)
           Threads.@threads :static for i in 1:Threads.threadpoolsize()
               busywait(1)
           end
       end
6.003001 seconds (16.33 k allocations: 899.255 KiB, 0.25% compilation time)

julia> @time begin
           Threads.@spawn busywait(5)
           Threads.@threads :dynamic for i in 1:Threads.threadpoolsize()
               busywait(1)
           end
       end
2.012056 seconds (16.05 k allocations: 883.919 KiB, 0.66% compilation time)
The :dynamic example takes 2 seconds since one of the non-occupied threads is able to run two of the 1-second iterations to complete the for loop.
Base.Threads.foreach
— Function
Threads.foreach(f, channel::Channel; schedule::Threads.AbstractSchedule=Threads.FairSchedule(), ntasks=Threads.threadpoolsize())
Similar to foreach(f, channel), but iteration over channel and calls to f are split across ntasks tasks spawned by Threads.@spawn. This function will wait for all internally spawned tasks to complete before returning.
If schedule isa FairSchedule, Threads.foreach will attempt to spawn tasks in a manner that enables Julia's scheduler to more freely load-balance work items across threads. This approach generally has higher per-item overhead, but may perform better than StaticSchedule in concurrence with other multithreaded workloads.
If schedule isa StaticSchedule, Threads.foreach will spawn tasks in a manner that incurs lower per-item overhead than FairSchedule, but is less amenable to load-balancing. This approach thus may be more suitable for fine-grained, uniform workloads, but may perform worse than FairSchedule in concurrence with other multithreaded workloads.
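For instance, to opt into the lower-overhead StaticSchedule (an illustrative sketch; the channel, the squaring work, and the atomic accumulator are placeholders, not from the docstring):

items = Channel{Int}(ch -> foreach(i -> put!(ch, i), 1:100))
total = Threads.Atomic{Int}(0)
# StaticSchedule: lower per-item overhead, less load balancing than the default FairSchedule.
Threads.foreach(items; schedule=Threads.StaticSchedule()) do i
    Threads.atomic_add!(total, i^2)
end
total[]   # 338350, the sum of squares of 1:100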
Examples
julia> n = 20

julia> c = Channel{Int}(ch -> foreach(i -> put!(ch, i), 1:n), 1)

julia> d = Channel{Int}(n) do ch
           f = i -> put!(ch, i^2)
           Threads.foreach(f, c)
       end

julia> collect(d)
collect(d) = [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324, 361, 400]
This function requires Julia 1.6 or later.
Base.Threads.@spawn
— Macro
Threads.@spawn [:default|:interactive] expr
Create a Task and schedule it to run on any available thread in the specified threadpool (:default if unspecified). The task is allocated to a thread once one becomes available. To wait for the task to finish, call wait on the result of this macro, or call fetch to wait and then obtain its return value.
Values can be interpolated into @spawn via $, which copies the value directly into the constructed underlying closure. This allows you to insert the value of a variable, isolating the asynchronous code from changes to the variable's value in the current task.
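For example (an illustrative sketch): without $ the task reads the variable when its body runs, while $x copies the value at spawn time.

x = 1
t1 = Threads.@spawn (sleep(0.1); x)    # reads the variable x when the task body runs
t2 = Threads.@spawn (sleep(0.1); $x)   # the value 1 is copied into the task's closure now
x = 2
fetch(t1), fetch(t2)                   # typically (2, 1)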
The thread that the task runs on may change if the task yields; therefore threadid() should not be treated as constant for a task. See Task Migration, and the broader multi-threading manual for further important caveats. See also the chapter on threadpools.
This macro is available as of Julia 1.3.
Interpolating values via $ is available as of Julia 1.4.
A threadpool may be specified as of Julia 1.9.
Examples
julia> t() = println("Hello from ", Threads.threadid());

julia> tasks = fetch.([Threads.@spawn t() for i in 1:4]);
Hello from 1
Hello from 1
Hello from 3
Hello from 4
Base.Threads.threadid
— Function
Threads.threadid() -> Int
Get the ID number of the current thread of execution. The master thread has ID 1.
Examples
julia> Threads.threadid()
1

julia> Threads.@threads for i in 1:4
           println(Threads.threadid())
       end
4
2
5
4
The thread that a task runs on may change if the task yields, which is known as Task Migration. For this reason, in most cases it is not safe to use threadid() to index into, say, a vector of buffers or stateful objects.
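A common safer alternative (a sketch, not part of the docstring) is to give each task its own chunk and its own partial result, instead of indexing shared buffers by threadid():

xs = rand(1_000_000)
chunks = Iterators.partition(eachindex(xs), cld(length(xs), Threads.nthreads()))
tasks = map(chunks) do idxs
    Threads.@spawn sum(@view xs[idxs])   # each task owns its partial result
end
total = sum(fetch.(tasks))               # ≈ sum(xs), no threadid()-indexed buffers needed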
Base.Threads.maxthreadid
— Function
Threads.maxthreadid() -> Int
Get a lower bound on the number of threads (across all thread pools) available to the Julia process, with atomic-acquire semantics. The result will always be greater than or equal to threadid() as well as threadid(task) for any task you were able to observe before calling maxthreadid.
Base.Threads.nthreads
— Function
Threads.nthreads(:default | :interactive) -> Int
Get the current number of threads within the specified thread pool. The threads in :interactive have id numbers 1:nthreads(:interactive), and the threads in :default have id numbers in nthreads(:interactive) .+ (1:nthreads(:default)).
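For example, in a session started with julia --threads=3,1 (three :default threads plus one :interactive thread; the values below depend entirely on how Julia was launched):

julia> Threads.nthreads(:interactive)
1

julia> Threads.nthreads(:default)
3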
See also BLAS.get_num_threads and BLAS.set_num_threads in the LinearAlgebra standard library, nprocs() in the Distributed standard library, and Threads.maxthreadid().
Base.Threads.threadpool
— Function
Threads.threadpool(tid = threadid()) -> Symbol
Returns the specified thread's threadpool; either :default, :interactive, or :foreign.
Base.Threads.nthreadpools
— Function
Threads.nthreadpools() -> Int
Returns the number of threadpools currently configured.
Base.Threads.threadpoolsize
— Function
Threads.threadpoolsize(pool::Symbol = :default) -> Int
Get the number of threads available to the default thread pool (or to the specified thread pool).
See also: BLAS.get_num_threads and BLAS.set_num_threads in the LinearAlgebra standard library, and nprocs() in the Distributed standard library.
Base.Threads.ngcthreads
— Function
Threads.ngcthreads() -> Int
Returns the number of GC threads currently configured. This includes both mark threads and concurrent sweep threads.
See also Multi-Threading.
atomic
— Keyword
Unsafe pointer operations are compatible with loading and storing pointers declared with _Atomic and std::atomic type in C11 and C++23 respectively. An error may be thrown if there is no support for atomically loading the Julia type T.
See also: unsafe_load, unsafe_modify!, unsafe_replace!, unsafe_store!, unsafe_swap!
Base.@atomic
— Macro
@atomic var
@atomic order ex
Mark var or ex as being performed atomically, if ex is a supported expression. If no order is specified it defaults to :sequentially_consistent.
@atomic a.b.x = new
@atomic a.b.x += addend
@atomic :release a.b.x = new
@atomic :acquire_release a.b.x += addend
Perform the store operation expressed on the right atomically and return the new value.
With =, this operation translates to a setproperty!(a.b, :x, new) call. With any operator also, this operation translates to a modifyproperty!(a.b, :x, +, addend)[2] call.
@atomic a.b.x max arg2
@atomic a.b.x + arg2
@atomic max(a.b.x, arg2)
@atomic :acquire_release max(a.b.x, arg2)
@atomic :acquire_release a.b.x + arg2
@atomic :acquire_release a.b.x max arg2
Perform the binary operation expressed on the right atomically. Store the result into the field in the first argument and return the values (old, new).
This operation translates to a modifyproperty!(a.b, :x, func, arg2) call.
See the Per-field atomics section in the manual for more details.
Examples
julia> mutable struct Atomic{T}; @atomic x::T; end

julia> a = Atomic(1)
Atomic{Int64}(1)

julia> @atomic a.x  # fetch field x of a, with sequential consistency
1

julia> @atomic :sequentially_consistent a.x = 2  # set field x of a, with sequential consistency
2

julia> @atomic a.x += 1  # increment field x of a, with sequential consistency
3

julia> @atomic a.x + 1  # increment field x of a, with sequential consistency
3 => 4

julia> @atomic a.x  # fetch field x of a, with sequential consistency
4

julia> @atomic max(a.x, 10)  # change field x of a to the max value, with sequential consistency
4 => 10

julia> @atomic a.x max 5  # again change field x of a to the max value, with sequential consistency
10 => 10
This functionality requires at least Julia 1.7.
Base.@atomicswap
— Macro
@atomicswap a.b.x = new
@atomicswap :sequentially_consistent a.b.x = new
Stores new into a.b.x and returns the old value of a.b.x.
This operation translates to a swapproperty!(a.b, :x, new) call.
See the Per-field atomics section in the manual for more details.
Examples
julia> mutable struct Atomic{T}; @atomic x::T; end

julia> a = Atomic(1)
Atomic{Int64}(1)

julia> @atomicswap a.x = 2+2  # replace field x of a with 4, with sequential consistency
1

julia> @atomic a.x  # fetch field x of a, with sequential consistency
4
This functionality requires at least Julia 1.7.
Base.@atomicreplace
— Macro
@atomicreplace a.b.x expected => desired
@atomicreplace :sequentially_consistent a.b.x expected => desired
@atomicreplace :sequentially_consistent :monotonic a.b.x expected => desired
Perform the conditional replacement expressed by the pair atomically, returning the values (old, success::Bool), where success indicates whether the replacement was completed.
This operation translates to a replaceproperty!(a.b, :x, expected, desired) call.
See the Per-field atomics section in the manual for more details.
Examples
julia> mutable struct Atomic{T}; @atomic x::T; end

julia> a = Atomic(1)
Atomic{Int64}(1)

julia> @atomicreplace a.x 1 => 2  # replace field x of a with 2 if it was 1, with sequential consistency
(old = 1, success = true)

julia> @atomic a.x  # fetch field x of a, with sequential consistency
2

julia> @atomicreplace a.x 1 => 2  # replace field x of a with 2 if it was 1, with sequential consistency
(old = 2, success = false)

julia> xchg = 2 => 0;  # replace field x of a with 0 if it was 2, with sequential consistency

julia> @atomicreplace a.x xchg
(old = 2, success = true)

julia> @atomic a.x  # fetch field x of a, with sequential consistency
0
This functionality requires at least Julia 1.7.
Base.@atomiconce
— Macro
@atomiconce a.b.x = value
@atomiconce :sequentially_consistent a.b.x = value
@atomiconce :sequentially_consistent :monotonic a.b.x = value
Perform the conditional assignment of value atomically if it was previously unset, returning the value success::Bool, where success indicates whether the assignment was completed.
This operation translates to a setpropertyonce!(a.b, :x, value) call.
See the Per-field atomics section in the manual for more details.
Examples
julia> mutable struct AtomicOnce
           @atomic x
           AtomicOnce() = new()
       end

julia> a = AtomicOnce()
AtomicOnce(#undef)

julia> @atomiconce a.x = 1  # set field x of a to 1, if unset, with sequential consistency
true

julia> @atomic a.x  # fetch field x of a, with sequential consistency
1

julia> @atomiconce a.x = 1  # set field x of a to 1, if unset, with sequential consistency
false
This functionality requires at least Julia 1.11.
Core.AtomicMemory
— Type
AtomicMemory{T} == GenericMemory{:atomic, T, Core.CPU}
Fixed-size DenseVector{T}. Access to any of its elements is performed atomically (with :monotonic ordering). Setting any of the elements must be accomplished using the @atomic macro and explicitly specifying ordering.
Each element is independently atomic when accessed, and cannot be set non-atomically. Currently the @atomic macro and higher-level interface have not been completed, but the building blocks for a future implementation are the internal intrinsics Core.memoryrefget, Core.memoryrefset!, Core.memoryref_isassigned, Core.memoryrefswap!, Core.memoryrefmodify!, and Core.memoryrefreplace!.
For details, see Atomic Operations.
This type requires Julia 1.11 or later.
There are also optional memory ordering parameters for the unsafe set of functions that select the C/C++-compatible versions of these atomic operations, if that parameter is specified to unsafe_load, unsafe_store!, unsafe_swap!, unsafe_replace!, and unsafe_modify!.
The following APIs are deprecated, though support for them is likely to remain for several releases.
Base.Threads.Atomic
— Type
Threads.Atomic{T}
Holds a reference to an object of type T, ensuring that it is only accessed atomically, i.e. in a thread-safe manner.
Only certain "simple" types can be used atomically, namely the primitive boolean, integer, and floating-point types. These are Bool, Int8...Int128, UInt8...UInt128, and Float16...Float64.
New atomic objects can be created from non-atomic values; if none is specified, the atomic object is initialized with zero.
Atomic objects can be accessed using the [] notation:
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> x[] = 1
1

julia> x[]
1
Atomic operations use an atomic_ prefix, such as atomic_add!, atomic_xchg!, etc.
Base.Threads.atomic_cas!
— Function
Threads.atomic_cas!(x::Atomic{T}, cmp::T, newval::T) where T
Atomically compare-and-set x
Atomically compares the value in x with cmp. If equal, writes newval to x. Otherwise, leaves x unmodified. Returns the old value in x. By comparing the returned value to cmp (via ===) one knows whether x was modified and now holds the new value newval.
For further details, see LLVM's cmpxchg instruction.
This function can be used to implement transactional semantics. Before the transaction, one records the value in x. After the transaction, the new value is stored only if x has not been modified in the meantime.
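For instance, a compare-and-swap retry loop can apply an arbitrary update function atomically (a minimal sketch; atomic_apply! is a hypothetical helper, and this particular update is already covered by atomic_max!):

function atomic_apply!(f, x::Threads.Atomic{T}) where {T}
    while true
        old = x[]
        new = f(old)
        # Store `new` only if no other task changed x since `old` was read.
        Threads.atomic_cas!(x, old, new) === old && return new
    end
end

x = Threads.Atomic{Int}(3)
atomic_apply!(v -> max(v, 10), x)   # x[] is now 10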
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_cas!(x, 4, 2);

julia> x
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_cas!(x, 3, 2);

julia> x
Base.Threads.Atomic{Int64}(2)
Base.Threads.atomic_xchg!
— Function
Threads.atomic_xchg!(x::Atomic{T}, newval::T) where T
Atomically exchange the value in x
Atomically exchanges the value in x with newval. Returns the old value.
For further details, see LLVM's atomicrmw xchg instruction.
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_xchg!(x, 2)
3

julia> x[]
2
Base.Threads.atomic_add!
— Function
Threads.atomic_add!(x::Atomic{T}, val::T) where T <: ArithmeticTypes
Atomically add val to x
Performs x[] += val atomically. Returns the old value. Not defined for Atomic{Bool}.
For further details, see LLVM's atomicrmw add instruction.
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_add!(x, 2)
3

julia> x[]
5
Base.Threads.atomic_sub!
— Function
Threads.atomic_sub!(x::Atomic{T}, val::T) where T <: ArithmeticTypes
Atomically subtract val from x
Performs x[] -= val atomically. Returns the old value. Not defined for Atomic{Bool}.
For further details, see LLVM's atomicrmw sub instruction.
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_sub!(x, 2)
3

julia> x[]
1
Base.Threads.atomic_and!
— Function
Threads.atomic_and!(x::Atomic{T}, val::T) where T
Atomically bitwise-and x with val
Performs x[] &= val atomically. Returns the old value.
For further details, see LLVM's atomicrmw and instruction.
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_and!(x, 2)
3

julia> x[]
2
Base.Threads.atomic_nand!
— Function
Threads.atomic_nand!(x::Atomic{T}, val::T) where T
Atomically bitwise-nand (not-and) x with val
Performs x[] = ~(x[] & val) atomically. Returns the old value.
For further details, see LLVM's atomicrmw nand instruction.
Examples
julia> x = Threads.Atomic{Int}(3)
Base.Threads.Atomic{Int64}(3)

julia> Threads.atomic_nand!(x, 2)
3

julia> x[]
-3
Base.Threads.atomic_or!
— Function
Threads.atomic_or!(x::Atomic{T}, val::T) where T
Atomically bitwise-or x with val
Performs x[] |= val atomically. Returns the old value.
For further details, see LLVM's atomicrmw or instruction.
Examples
julia> x = Threads.Atomic{Int}(5)
Base.Threads.Atomic{Int64}(5)

julia> Threads.atomic_or!(x, 7)
5

julia> x[]
7
Base.Threads.atomic_xor!
— Function
Threads.atomic_xor!(x::Atomic{T}, val::T) where T
Atomically bitwise-xor (exclusive-or) x with val
Performs x[] ⊻= val atomically. Returns the old value.
For further details, see LLVM's atomicrmw xor instruction.
Examples
julia> x = Threads.Atomic{Int}(5)
Base.Threads.Atomic{Int64}(5)

julia> Threads.atomic_xor!(x, 7)
5

julia> x[]
2
Base.Threads.atomic_max!
— Function
Threads.atomic_max!(x::Atomic{T}, val::T) where T
Atomically store the maximum of x and val in x
Performs x[] = max(x[], val) atomically. Returns the old value.
For further details, see LLVM's atomicrmw max instruction.
Examples
julia> x = Threads.Atomic{Int}(5)
Base.Threads.Atomic{Int64}(5)

julia> Threads.atomic_max!(x, 7)
5

julia> x[]
7
Base.Threads.atomic_min!
— Function
Threads.atomic_min!(x::Atomic{T}, val::T) where T
Atomically store the minimum of x and val in x
Performs x[] = min(x[], val) atomically. Returns the old value.
For further details, see LLVM's atomicrmw min instruction.
Examples
julia> x = Threads.Atomic{Int}(7)
Base.Threads.Atomic{Int64}(7)

julia> Threads.atomic_min!(x, 5)
7

julia> x[]
5
Base.Threads.atomic_fence
— Function
Threads.atomic_fence()
Insert a sequential-consistency memory fence
Inserts a memory fence with sequentially-consistent ordering semantics. There are algorithms where this is needed, i.e. where an acquire/release ordering is insufficient.
This is likely a very expensive operation. Given that all other atomic operations in Julia already have acquire/release semantics, explicit fences should not be necessary in most cases.
For further details, see LLVM's fence instruction.
Base.@threadcall
— Macro
@threadcall((cfunc, clib), rettype, (argtypes...), argvals...)
The @threadcall macro is called in the same way as ccall but does the work in a different thread. This is useful when you want to call a blocking C function without causing the current julia thread to become blocked. Concurrency is limited by the size of the libuv thread pool, which defaults to 4 threads but can be increased by setting the UV_THREADPOOL_SIZE environment variable and restarting the julia process.
Note that the called function should never call back into Julia.
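For example (an illustrative sketch; it assumes a glibc-based Linux system where the C library is loadable as "libc.so.6"), blocking in C's sleep on a libuv worker thread while Julia tasks keep running on the current thread:

# C prototype: unsigned int sleep(unsigned int seconds);
@threadcall((:sleep, "libc.so.6"), Cuint, (Cuint,), 2)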
These building blocks are used to create the regular synchronization objects.
Base.Threads.SpinLock
— Type
SpinLock()
Create a non-reentrant, test-and-test-and-set spin lock. Recursive use will result in a deadlock. This kind of lock should only be used around code that takes little time to execute and does not block (e.g. perform I/O). In general, ReentrantLock should be used instead.
Each lock must be matched with an unlock. If !islocked(lck::SpinLock) holds, trylock(lck) succeeds unless there are other tasks attempting to hold the lock "at the same time."
Test-and-test-and-set spin locks are quickest up to about 30ish contending threads. If you have more contention than that, different synchronization approaches should be considered.
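A minimal usage sketch (not from the docstring), pairing every lock with an unlock via the do-block form of lock:

counter = Ref(0)
lk = Threads.SpinLock()
Threads.@threads for i in 1:1000
    lock(lk) do
        counter[] += 1        # short, non-blocking critical section
    end
end
counter[]   # 1000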