      std::memory_order

Defined in header <atomic>

enum memory_order
{
    memory_order_relaxed,
    memory_order_consume,
    memory_order_acquire,
    memory_order_release,
    memory_order_acq_rel,
    memory_order_seq_cst
};
(since C++11)
(until C++20)

enum class memory_order : /* unspecified */
{
    relaxed, consume, acquire, release, acq_rel, seq_cst
};
inline constexpr memory_order memory_order_relaxed = memory_order::relaxed;
inline constexpr memory_order memory_order_consume = memory_order::consume;
inline constexpr memory_order memory_order_acquire = memory_order::acquire;
inline constexpr memory_order memory_order_release = memory_order::release;
inline constexpr memory_order memory_order_acq_rel = memory_order::acq_rel;
inline constexpr memory_order memory_order_seq_cst = memory_order::seq_cst;
(since C++20)

      std::memory_order specifies how memory accesses, including regular, non-atomic memory accesses, are to be ordered around an atomic operation. Absent any constraints on a multi-core system, when multiple threads simultaneously read and write to several variables, one thread can observe the values change in an order different from the order another thread wrote them. Indeed, the apparent order of changes can even differ among multiple reader threads. Some similar effects can occur even on uniprocessor systems due to compiler transformations allowed by the memory model.

The default behavior of all atomic operations in the library provides for sequentially consistent ordering (see discussion below). That default can hurt performance, but the library's atomic operations can be given an additional std::memory_order argument to specify the exact constraints, beyond atomicity, that the compiler and processor must enforce for that operation.
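For instance, here is a minimal sketch (the variable and function names are illustrative, not part of the library) contrasting the implicit default with an explicit ordering argument:

#include <atomic>

std::atomic<int> counter{0}; // illustrative variable

void sketch()
{
    counter.fetch_add(1);                            // no argument: defaults to std::memory_order_seq_cst
    counter.fetch_add(1, std::memory_order_relaxed); // explicit weaker ordering: atomicity only
    int observed = counter.load(std::memory_order_acquire); // explicit acquire load
    (void)observed;
}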


Constants

Defined in header <atomic>

memory_order_relaxed
    Relaxed operation: there are no synchronization or ordering constraints imposed on other reads or writes; only this operation's atomicity is guaranteed (see Relaxed ordering below).

memory_order_consume (deprecated in C++26)
    A load operation with this memory order performs a consume operation on the affected memory location: no reads or writes in the current thread dependent on the value currently loaded can be reordered before this load. Writes to data-dependent variables in other threads that release the same atomic variable are visible in the current thread. On most platforms, this affects compiler optimizations only (see Release-Consume ordering below).

memory_order_acquire
    A load operation with this memory order performs the acquire operation on the affected memory location: no reads or writes in the current thread can be reordered before this load. All writes in other threads that release the same atomic variable are visible in the current thread (see Release-Acquire ordering below).

memory_order_release
    A store operation with this memory order performs the release operation: no reads or writes in the current thread can be reordered after this store. All writes in the current thread are visible in other threads that acquire the same atomic variable (see Release-Acquire ordering below), and writes that carry a dependency into the atomic variable become visible in other threads that consume the same atomic (see Release-Consume ordering below).

memory_order_acq_rel
    A read-modify-write operation with this memory order is both an acquire operation and a release operation. No memory reads or writes in the current thread can be reordered before the load, nor after the store. All writes in other threads that release the same atomic variable are visible before the modification, and the modification is visible in other threads that acquire the same atomic variable.

memory_order_seq_cst
    A load operation with this memory order performs an acquire operation, a store performs a release operation, and a read-modify-write performs both an acquire operation and a release operation, plus a single total order exists in which all threads observe all modifications in the same order (see Sequentially-consistent ordering below).
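As a rough illustration of how the acquire and release constants are typically paired, here is a minimal spinlock sketch (for exposition only; this class is not part of the standard library, and std::mutex is usually the better choice):

#include <atomic>

class spinlock_sketch
{
    std::atomic_flag flag = ATOMIC_FLAG_INIT;
public:
    void lock()
    {
        // The read-modify-write acts as an acquire operation: memory accesses in the
        // critical section cannot be reordered before it.
        while (flag.test_and_set(std::memory_order_acquire))
            ; // spin until the flag was previously clear
    }
    void unlock()
    {
        // The store acts as a release operation: memory accesses in the critical
        // section cannot be reordered after it.
        flag.clear(std::memory_order_release);
    }
};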

Formal description

Inter-thread synchronization and memory ordering determine how evaluations and side effects of expressions are ordered between different threads of execution. They are defined in the following terms:

Sequenced-before

Within the same thread, evaluation A may be sequenced-before evaluation B, as described in evaluation order.

Carries dependency

Within the same thread, evaluation A that is sequenced-before evaluation B may also carry a dependency into B (that is, B depends on A), if any of the following is true:

1) The value of A is used as an operand of B, except
a) if B is a call to std::kill_dependency,
b) if A is the left operand of the built-in &&, ||, ?:, or , (comma) operators.
2) A writes to a scalar object M, and B reads from M.
3) A carries a dependency into another evaluation X, and X carries a dependency into B.
(until C++26)
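A minimal sketch of a dependency chain (the names guard and payload are placeholders; it assumes a producer thread has already published a pointer via a release store):

#include <atomic>

std::atomic<int*> guard;   // placeholder atomic pointer
int payload;               // placeholder non-atomic data

int consume_sketch()
{
    int* p;
    while (!(p = guard.load(std::memory_order_consume))) // A: consume load
        ;
    int a = *p;                       // the value of A is an operand of this read, so A carries a dependency into it
    int b = std::kill_dependency(*p); // the dependency chain is deliberately terminated here
    return a + b;
}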

Modification order

All modifications to any particular atomic variable occur in a total order that is specific to this one atomic variable.

The following four requirements are guaranteed for all atomic operations:

1) Write-write coherence: if evaluation A that modifies some atomic M (a write) happens-before evaluation B that modifies M, then A appears earlier than B in the modification order of M.
2) Read-read coherence: if a value computation A of some atomic M (a read) happens-before a value computation B on M, and if the value of A comes from a write X on M, then the value of B is either the value stored by X, or the value stored by a side effect Y on M that appears later than X in the modification order of M.
3) Read-write coherence: if a value computation A of some atomic M (a read) happens-before an operation B on M (a write), then the value of A comes from a side effect (a write) X that appears earlier than B in the modification order of M.
4) Write-read coherence: if a side effect (a write) X on an atomic object M happens-before a value computation (a read) B of M, then the evaluation B shall take its value from X or from a side effect Y that follows X in the modification order of M.
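For example, read-read coherence can be illustrated with two relaxed loads in one thread (a sketch; m is a placeholder, and another thread is assumed to perform m.store(1, std::memory_order_relaxed)):

#include <atomic>
#include <cassert>

std::atomic<int> m{0}; // another thread is assumed to store 1 into m

void reader_sketch()
{
    int first  = m.load(std::memory_order_relaxed); // read A
    int second = m.load(std::memory_order_relaxed); // read B; A happens-before B (sequenced-before)
    // Read-read coherence: once A has observed the value 1, B cannot observe the
    // initial 0, because 0 precedes 1 in the modification order of m.
    if (first == 1)
        assert(second == 1); // never fires
}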

Release sequence

After a release operation A is performed on an atomic object M, the longest continuous subsequence of the modification order of M that consists of:

1) writes performed by the same thread that performed A,
(until C++20)
2) atomic read-modify-write operations made to M by any thread

is known as the release sequence headed by A.

Synchronizes with

If an atomic store in thread A is a release operation, an atomic load in thread B from the same variable is an acquire operation, and the load in thread B reads a value written by the store in thread A, then the store in thread A synchronizes-with the load in thread B.

Also, some library calls may be defined to synchronize-with other library calls on other threads.

Dependency-ordered before

Between threads, evaluation A is dependency-ordered before evaluation B if any of the following is true:

1) A performs a release operation on some atomic M, and, in a different thread, B performs a consume operation on the same atomic M, and B reads a value written by any part of the release sequence headed(until C++20) by A.
2) A is dependency-ordered before X, and X carries a dependency into B.
(until C++26)

Inter-thread happens-before

Between threads, evaluation A inter-thread happens before evaluation B if any of the following is true:

1) A synchronizes-with B.
2) A is dependency-ordered before B.
3) A synchronizes-with some evaluation X, and X is sequenced-before B.
4) A is sequenced-before some evaluation X, and X inter-thread happens-before B.
5) A inter-thread happens-before some evaluation X, and X inter-thread happens-before B.


Happens-before

Regardless of threads, evaluation A happens-before evaluation B if any of the following is true:

1) A is sequenced-before B.
2) A inter-thread happens before B.

The implementation is required to ensure that the happens-before relation is acyclic, by introducing additional synchronization if necessary (it can only be necessary if a consume operation is involved, see Batty et al).

If one evaluation modifies a memory location, and the other reads or modifies the same memory location, and if at least one of the evaluations is not an atomic operation, the behavior of the program is undefined (the program has a data race) unless there exists a happens-before relationship between these two evaluations.
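For example, the following sketch (names are illustrative) shows how a happens-before relationship established through a release/acquire pair makes a non-atomic access well-defined; without that pair, the same accesses would constitute a data race:

#include <atomic>
#include <thread>

int plain = 0;                   // non-atomic data
std::atomic<bool> ready{false};

void writer()
{
    plain = 42;                                   // non-atomic write
    ready.store(true, std::memory_order_release); // release store
}

void reader()
{
    while (!ready.load(std::memory_order_acquire)) // acquire load synchronizes-with the release store
        ;
    int r = plain; // OK: the write to plain happens-before this read
    (void)r;       // without the release/acquire pair this read would be a data race
}

int main()
{
    std::thread t1(writer), t2(reader);
    t1.join();
    t2.join();
}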

Simply happens-before

Regardless of threads, evaluation A simply happens-before evaluation B if any of the following is true:

1) A is sequenced-before B.
2) A synchronizes-with B.
3) A simply happens-before X, and X simply happens-before B.

Note: without consume operations, the simply happens-before and happens-before relations are the same.

(since C++20)
(until C++26)

Happens-before

Regardless of threads, evaluation A happens-before evaluation B if any of the following is true:

1) A is sequenced-before B.
2) A synchronizes-with B.
3) A happens-before X, and X happens-before B.
(since C++26)

Strongly happens-before

Regardless of threads, evaluation A strongly happens-before evaluation B if any of the following is true:

1) A is sequenced-before B.
2) A synchronizes-with B.
3) A strongly happens-before X, and X strongly happens-before B.
(until C++20)

1) A is sequenced-before B.
2) A synchronizes with B, and both A and B are sequentially consistent atomic operations.
3) A is sequenced-before X, X simply(until C++26) happens-before Y, and Y is sequenced-before B.
4) A strongly happens-before X, and X strongly happens-before B.

Note: informally, if A strongly happens-before B, then A appears to be evaluated before B in all contexts.

Note: strongly happens-before excludes consume operations.

(until C++26)
(since C++20)

Visible side-effects

The side effect A on a scalar M (a write) is visible with respect to value computation B on M (a read) if both of the following are true:

1) A happens-before B.
2) There is no other side effect X to M where A happens-before X and X happens-before B.

If side effect A is visible with respect to the value computation B, then the longest contiguous subset of the side effects to M, in modification order, where B does not happen-before it, is known as the visible sequence of side effects (the value of M, determined by B, will be the value stored by one of these side effects).

Note: inter-thread synchronization boils down to preventing data races (by establishing happens-before relationships) and defining which side effects become visible under what conditions.

Consume operation

An atomic load with memory_order_consume or stronger is a consume operation. Note that std::atomic_thread_fence imposes stronger synchronization requirements than a consume operation.

Acquire operation

An atomic load with memory_order_acquire or stronger is an acquire operation. The lock() operation on a Mutex is also an acquire operation. Note that std::atomic_thread_fence imposes stronger synchronization requirements than an acquire operation.

Release operation

An atomic store with memory_order_release or stronger is a release operation. The unlock() operation on a Mutex is also a release operation. Note that std::atomic_thread_fence imposes stronger synchronization requirements than a release operation.
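A small sketch of the mutex case (illustrative names): lock() behaves as an acquire operation and unlock() as a release operation, so whatever the previous holder wrote before unlocking is visible to the next holder after locking:

#include <mutex>

std::mutex mtx;        // illustrative mutex
int shared_value = 0;  // data protected by mtx

void writer_sketch()
{
    std::lock_guard<std::mutex> lk(mtx); // lock(): acquire operation
    shared_value = 1;
}                                        // unlock() on destruction: release operation

void reader_sketch()
{
    std::lock_guard<std::mutex> lk(mtx); // acquire: sees all writes made before the prior unlock
    int v = shared_value;
    (void)v;
}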

Explanation

Relaxed ordering

Atomic operations tagged memory_order_relaxed are not synchronization operations; they do not impose an order among concurrent memory accesses. They only guarantee atomicity and modification order consistency.

For example, with x and y initially zero,

// Thread 1:
r1 = y.load(std::memory_order_relaxed); // A
x.store(r1, std::memory_order_relaxed); // B
// Thread 2:
r2 = x.load(std::memory_order_relaxed); // C
y.store(42, std::memory_order_relaxed); // D

is allowed to produce r1 == r2 == 42 because, although A is sequenced-before B within thread 1 and C is sequenced-before D within thread 2, nothing prevents D from appearing before A in the modification order of y, and B from appearing before C in the modification order of x. The side effect of D on y could be visible to the load A in thread 1 while the side effect of B on x could be visible to the load C in thread 2. In particular, this may occur if D is completed before C in thread 2, either due to compiler reordering or at runtime.

Even with a relaxed memory model, out-of-thin-air values are not allowed to circularly depend on their own computations; for example, with x and y initially zero,

// Thread 1:
r1 = y.load(std::memory_order_relaxed);
if (r1 == 42)
    x.store(r1, std::memory_order_relaxed);
// Thread 2:
r2 = x.load(std::memory_order_relaxed);
if (r2 == 42)
    y.store(42, std::memory_order_relaxed);

is not allowed to produce r1 == r2 == 42, since the store of 42 to y is only possible if the store to x stores 42, which circularly depends on the store to y storing 42. Note that until C++14, this was technically allowed by the specification, but not recommended for implementors.

(since C++14)

Typical use for relaxed memory ordering is incrementing counters, such as the reference counters of std::shared_ptr, since this only requires atomicity, but not ordering or synchronization (note that decrementing the std::shared_ptr counters requires acquire-release synchronization with the destructor).

#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

std::atomic<int> cnt = {0};

void f()
{
    for (int n = 0; n < 1000; ++n)
        cnt.fetch_add(1, std::memory_order_relaxed);
}

int main()
{
    std::vector<std::thread> v;
    for (int n = 0; n < 10; ++n)
        v.emplace_back(f);
    for (auto& t : v)
        t.join();
    std::cout << "Final counter value is " << cnt << '\n';
}

      Output:

      Final counter value is 10000

Release-Acquire ordering

If an atomic store in thread A is tagged memory_order_release, an atomic load in thread B from the same variable is tagged memory_order_acquire, and the load in thread B reads a value written by the store in thread A, then the store in thread A synchronizes-with the load in thread B.

All memory writes (including non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A become visible side-effects in thread B. That is, once the atomic load is completed, thread B is guaranteed to see everything thread A wrote to memory. This promise only holds if B actually returns the value that A stored, or a value from later in the release sequence.

The synchronization is established only between the threads releasing and acquiring the same atomic variable. Other threads can see a different order of memory accesses than either or both of the synchronized threads.

On strongly-ordered systems (x86, SPARC TSO, IBM mainframe, etc.), release-acquire ordering is automatic for the majority of operations. No additional CPU instructions are issued for this synchronization mode; only certain compiler optimizations are affected (e.g., the compiler is prohibited from moving non-atomic stores past the atomic store-release or from performing non-atomic loads earlier than the atomic load-acquire). On weakly-ordered systems (ARM, Itanium, PowerPC), special CPU load or memory fence instructions are used.

Mutual exclusion locks, such as std::mutex or an atomic spinlock, are an example of release-acquire synchronization: when the lock is released by thread A and acquired by thread B, everything that took place in the critical section (before the release) in the context of thread A has to be visible to thread B (after the acquire), which is executing the same critical section.

#include <atomic>
#include <cassert>
#include <string>
#include <thread>

std::atomic<std::string*> ptr;
int data;

void producer()
{
    std::string* p = new std::string("Hello");
    data = 42;
    ptr.store(p, std::memory_order_release);
}

void consumer()
{
    std::string* p2;
    while (!(p2 = ptr.load(std::memory_order_acquire)))
        ;
    assert(*p2 == "Hello"); // never fires
    assert(data == 42);     // never fires
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}

      The following example demonstrates transitive release-acquire ordering across three threads, using a release sequence.

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

std::vector<int> data;
std::atomic<int> flag = {0};

void thread_1()
{
    data.push_back(42);
    flag.store(1, std::memory_order_release);
}

void thread_2()
{
    int expected = 1;
    // memory_order_relaxed is okay because this is an RMW,
    // and RMWs (with any ordering) following a release form a release sequence
    while (!flag.compare_exchange_strong(expected, 2, std::memory_order_relaxed))
    {
        expected = 1;
    }
}

void thread_3()
{
    while (flag.load(std::memory_order_acquire) < 2)
        ;
    // if we read the value 2 from the atomic flag, we see 42 in the vector
    assert(data.at(0) == 42); // will never fire
}

int main()
{
    std::thread a(thread_1);
    std::thread b(thread_2);
    std::thread c(thread_3);
    a.join(); b.join(); c.join();
}

Release-Consume ordering

If an atomic store in thread A is tagged memory_order_release, an atomic load in thread B from the same variable is tagged memory_order_consume, and the load in thread B reads a value written by the store in thread A, then the store in thread A is dependency-ordered before the load in thread B.

All memory writes (non-atomic and relaxed atomic) that happened-before the atomic store from the point of view of thread A become visible side-effects within those operations in thread B into which the load operation carries a dependency; that is, once the atomic load is completed, those operators and functions in thread B that use the value obtained from the load are guaranteed to see what thread A wrote to memory.

The synchronization is established only between the threads releasing and consuming the same atomic variable. Other threads can see a different order of memory accesses than either or both of the synchronized threads.

On all mainstream CPUs other than DEC Alpha, dependency ordering is automatic: no additional CPU instructions are issued for this synchronization mode, only certain compiler optimizations are affected (e.g., the compiler is prohibited from performing speculative loads on the objects that are involved in the dependency chain).

Typical use cases for this ordering involve read access to rarely written concurrent data structures (routing tables, configuration, security policies, firewall rules, etc.) and publisher-subscriber situations with pointer-mediated publication, that is, when the producer publishes a pointer through which the consumer can access information: there is no need to make everything else the producer wrote to memory visible to the consumer (which may be an expensive operation on weakly-ordered architectures). An example of such a scenario is rcu_dereference.

See also std::kill_dependency and [[carries_dependency]] for fine-grained dependency chain control.

Note that currently (as of February 2015) no known production compilers track dependency chains: consume operations are lifted to acquire operations.

      (until C++26)

The specification of release-consume ordering is being revised, and the use of memory_order_consume is temporarily discouraged.

      (since C++17)
      (until C++26)

      Release-consume ordering has the same effect as release-acquire ordering and is deprecated.

      (since C++26)

      This example demonstrates dependency-ordered synchronization for pointer-mediated publication: the integer data is not related to the pointer to string by a data-dependency relationship, thus its value is undefined in the consumer.

#include <atomic>
#include <cassert>
#include <string>
#include <thread>

std::atomic<std::string*> ptr;
int data;

void producer()
{
    std::string* p = new std::string("Hello");
    data = 42;
    ptr.store(p, std::memory_order_release);
}

void consumer()
{
    std::string* p2;
    while (!(p2 = ptr.load(std::memory_order_consume)))
        ;
    assert(*p2 == "Hello"); // never fires: *p2 carries a dependency from ptr
    assert(data == 42);     // may or may not fire: data does not carry a dependency from ptr
}

int main()
{
    std::thread t1(producer);
    std::thread t2(consumer);
    t1.join();
    t2.join();
}


Sequentially-consistent ordering

Atomic operations tagged memory_order_seq_cst not only order memory the same way as release/acquire ordering (everything that happened-before a store in one thread becomes a visible side effect in the thread that did a load), but also establish a single total modification order of all atomic operations that are so tagged.

Formally,

each memory_order_seq_cst operation B that loads from atomic variable M observes one of the following:

• the result of the last operation A that modified M, which appears before B in the single total order,
• OR, if there was such an A, B may observe the result of some modification on M that is not memory_order_seq_cst and does not happen-before A,
• OR, if there wasn't such an A, B may observe the result of some unrelated modification of M that is not memory_order_seq_cst.

If there was a memory_order_seq_cst std::atomic_thread_fence operation X sequenced-before B, then B observes one of the following:

• the last memory_order_seq_cst modification of M that appears before X in the single total order,
• some unrelated modification of M that appears later in M's modification order.

For a pair of atomic operations on M called A and B, where A writes and B reads M's value, if there are two memory_order_seq_cst std::atomic_thread_fences X and Y, and if A is sequenced-before X, Y is sequenced-before B, and X appears before Y in the single total order, then B observes either:

• the effect of A,
• some unrelated modification of M that appears after A in M's modification order.

For a pair of atomic modifications of M called A and B, B occurs after A in M's modification order if

• there is a memory_order_seq_cst std::atomic_thread_fence X such that A is sequenced-before X and X appears before B in the single total order,
• or, there is a memory_order_seq_cst std::atomic_thread_fence Y such that Y is sequenced-before B and A appears before Y in the single total order,
• or, there are memory_order_seq_cst std::atomic_thread_fences X and Y such that A is sequenced-before X, Y is sequenced-before B, and X appears before Y in the single total order.

Note that this means that:

1) as soon as atomic operations that are not tagged memory_order_seq_cst enter the picture, the sequential consistency is lost,
2) the sequentially-consistent fences only establish a total ordering for the fences themselves, not for the atomic operations in the general case (sequenced-before is not a cross-thread relationship, unlike happens-before).
(until C++20)
      (until C++20)
Formally,

an atomic operation A on some atomic object M is coherence-ordered-before another atomic operation B on M if any of the following is true:

1) A is a modification, and B reads the value stored by A,
2) A precedes B in the modification order of M,
3) A reads the value stored by an atomic modification X, X precedes B in the modification order, and A and B are not the same atomic read-modify-write operation,
4) A is coherence-ordered-before X, and X is coherence-ordered-before B.

There is a single total order S on all memory_order_seq_cst operations, including fences, that satisfies the following constraints:

1) if A and B are memory_order_seq_cst operations, and A strongly happens-before B, then A precedes B in S,
2) for every pair of atomic operations A and B on an object M, where A is coherence-ordered-before B:
a) if A and B are both memory_order_seq_cst operations, then A precedes B in S,
b) if A is a memory_order_seq_cst operation, and B happens-before a memory_order_seq_cst fence Y, then A precedes Y in S,
c) if a memory_order_seq_cst fence X happens-before A, and B is a memory_order_seq_cst operation, then X precedes B in S,
d) if a memory_order_seq_cst fence X happens-before A, and B happens-before a memory_order_seq_cst fence Y, then X precedes Y in S.

The formal definition ensures that:

1) the single total order is consistent with the modification order of any atomic object,
2) a memory_order_seq_cst load gets its value either from the last memory_order_seq_cst modification, or from some non-memory_order_seq_cst modification that does not happen-before preceding memory_order_seq_cst modifications.

The single total order might not be consistent with happens-before. This allows more efficient implementation of memory_order_acquire and memory_order_release on some CPUs. It can produce surprising results when memory_order_acquire and memory_order_release are mixed with memory_order_seq_cst.

For example, with x and y initially zero,

// Thread 1:
x.store(1, std::memory_order_seq_cst); // A
y.store(1, std::memory_order_release); // B
// Thread 2:
r1 = y.fetch_add(1, std::memory_order_seq_cst); // C
r2 = y.load(std::memory_order_relaxed);         // D
// Thread 3:
y.store(3, std::memory_order_seq_cst);  // E
r3 = x.load(std::memory_order_seq_cst); // F

is allowed to produce r1 == 1 && r2 == 3 && r3 == 0, where A happens-before C, but C precedes A in the single total order C-E-F-A of memory_order_seq_cst (see Lahav et al).

Note that:

1) as soon as atomic operations that are not tagged memory_order_seq_cst enter the picture, the sequential consistency guarantee for the program is lost,
2) in many cases, memory_order_seq_cst atomic operations are reorderable with respect to other atomic operations performed by the same thread.
      (since C++20)

      Sequential ordering may be necessary for multiple producer-multiple consumer situations where all consumers must observe the actions of all producers occurring in the same order.

      Total sequential ordering requires a full memory fence CPU instruction on all multi-core systems. This may become a performance bottleneck since it forces the affected memory accesses to propagate to every core.

This example demonstrates a situation where sequential ordering is necessary. Any other ordering may trigger the assert because it would be possible for the threads c and d to observe changes to the atomics x and y in opposite order.

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> x = {false};
std::atomic<bool> y = {false};
std::atomic<int> z = {0};

void write_x()
{
    x.store(true, std::memory_order_seq_cst);
}

void write_y()
{
    y.store(true, std::memory_order_seq_cst);
}

void read_x_then_y()
{
    while (!x.load(std::memory_order_seq_cst))
        ;
    if (y.load(std::memory_order_seq_cst))
        ++z;
}

void read_y_then_x()
{
    while (!y.load(std::memory_order_seq_cst))
        ;
    if (x.load(std::memory_order_seq_cst))
        ++z;
}

int main()
{
    std::thread a(write_x);
    std::thread b(write_y);
    std::thread c(read_x_then_y);
    std::thread d(read_y_then_x);
    a.join(); b.join(); c.join(); d.join();
    assert(z.load() != 0); // will never happen
}

Relationship with volatile

Within a thread of execution, accesses (reads and writes) through volatile glvalues cannot be reordered past observable side effects (including other volatile accesses) that are sequenced-before or sequenced-after within the same thread, but this order is not guaranteed to be observed by another thread, since volatile access does not establish inter-thread synchronization.

In addition, volatile accesses are not atomic (a concurrent read and write is a data race) and do not order memory (non-volatile memory accesses may be freely reordered around the volatile access).
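A brief sketch of the contrast (illustrative names): volatile alone does not make concurrent access well-defined, whereas std::atomic does:

#include <atomic>

volatile bool v_flag = false;     // volatile but NOT atomic: a concurrent write and read is a data race
std::atomic<bool> a_flag{false};  // atomic: safe for inter-thread signalling

void from_another_thread_sketch()
{
    // v_flag = true;  // would be undefined behavior if another thread reads v_flag concurrently
    a_flag.store(true, std::memory_order_release); // well-defined, and orders prior writes
}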

One notable exception is Visual Studio, where, with default settings, every volatile write has release semantics and every volatile read has acquire semantics (Microsoft Docs), and thus volatiles may be used for inter-thread synchronization. Standard volatile semantics are not applicable to multi-threaded programming, although they are sufficient for e.g. communication with a std::signal handler that runs in the same thread when applied to sig_atomic_t variables. The compiler option /volatile:iso can be used to restore behavior consistent with the standard, which is the default setting when the target platform is ARM.

See also

C documentation for memory order

External links

1. MOESI protocol
2. x86-TSO: A Rigorous and Usable Programmer's Model for x86 Multiprocessors, P. Sewell et al., 2010
3. A Tutorial Introduction to the ARM and POWER Relaxed Memory Models, P. Sewell et al., 2012
4. MESIF: A Two-Hop Cache Coherency Protocol for Point-to-Point Interconnects, J.R. Goodman, H.H.J. Hum, 2009
5. Memory Models, Russ Cox, 2021