Checklist for code reviews
code-review-checklists/java-concurrency
Design
- Concurrency is rationalized?
- Can use patterns to simplify concurrency?
- Immutability/Snapshotting
- Divide and conquer
- Producer-consumer
- Instance confinement
- Thread/Task/Serial thread confinement
- Active object
- Code smells, identifying that a class or a subsystem could potentially be redesigned for better:
Documentation
- Thread safety is justified in comments?
- Class (method, field) has concurrent access documentation?
- Threading model of a subsystem (class) is described?
- Concurrent control flow (or data flow) of a subsystem (class) is described?
- Class is documented as immutable, thread-safe, or not thread-safe?
- Used concurrency patterns are pronounced?
- `ConcurrentHashMap` is not stored in a variable of `Map` type?
- `compute()`-like methods are not called on a variable of `ConcurrentMap` type?
- `@GuardedBy` annotation is used?
- Safety of a benign race (e. g. unbalanced synchronization) is explained?
- Each use of `volatile` is justified?
- Each field that is neither `volatile` nor annotated with `@GuardedBy` has a comment?
Insufficient synchronization
- Static methods and fields are thread-safe?
- Thread doesn't wait in a loop for a non-volatile field to be updated by another thread?
- Read access to a non-volatile, concurrently updatable primitive field is protected?
- Servlets, Controllers, Filters, Handlers, `@Get`/`@Post` methods are thread-safe?
- Calls to `DateFormat.parse()` and `format()` are synchronized?
Excessive thread safety
- No "extra" (pseudo) thread safety?
- No atomics on which only `get()` and `set()` are called?
- Class (method) needs to be thread-safe?
- `ReentrantLock` (`ReentrantReadWriteLock`, `Semaphore`) needs to be fair?
Race conditions
- No `put()` or `remove()` calls on a `ConcurrentMap` (or Cache) after `get()` or `containsKey()`?
- No point accesses to a non-thread-safe collection outside of critical sections?
- Iteration over a non-thread-safe collection doesn't leak outside of a critical section?
- A non-thread-safe collection is not returned wrapped in `Collections.unmodifiable*()` from a getter in a thread-safe class?
- A synchronized collection is not returned from a getter in a thread-safe class?
- Non-trivial mutable object is not returned from a getter in a thread-safe class?
- No separate getters to an atomically updated state?
- No check-then-act race conditions (state used inside a critical section is read outside of it)?
- `coll.toArray(new E[coll.size()])` is not called on a synchronized collection?
- No race conditions with user or programmatic input or interop between programs?
- No check-then-act race conditions with file system operations?
- No concurrent `invalidate(key)` and `get()` calls on Guava's loading `Cache`?
- `Cache.put()` is not used (nor exposed in the own Cache interface)?
- Concurrent invalidation race is not possible on a lazily initialized state?
- Iteration, Stream pipeline, or copying a `Collections.synchronized*()` collection is protected by the lock?
- A synchronized collection is passed into `containsAll()`, `addAll()`, `removeAll()`, or `putAll()` under the lock?
Testing
- Considered adding multi-threaded unit tests for a thread-safe class or method?
- What is the worst thing that might happen if the code has a concurrency bug?
- A shared `Random` instance is not used from concurrent test workers?
- Concurrent test workers coordinate their start?
- There are more test threads than CPUs (if possible for the test)?
- Assertions in parallel threads and asynchronous code are handled properly?
- Checked the result of `CountDownLatch.await()`?
Locks
- Can use some concurrency utility instead of a lock with conditional `wait` (`await`) calls?
- Can use Guava's `Monitor` instead of a lock with conditional `wait` (`await`) calls?
- Can use `synchronized` instead of a `ReentrantLock`?
- `lock()` is called outside of `try {}`? No statements between `lock()` and `try {}`?
Avoiding deadlocks
- Can avoid nested critical sections?
- Locking order for nested critical sections is documented?
- Dynamically determined locks for nested critical sections are ordered?
- No extension API calls within critical sections?
- No calls to `ConcurrentHashMap`'s methods (incl. `get()`) in `compute()`-like lambdas on the same map?
Improving scalability
- Critical section is as small as possible?
- Can use `ConcurrentHashMap.compute()` or Guava's `Striped` for per-key locking?
- Can replace a blocking collection or a queue with a concurrent one?
- Can use `ClassValue` instead of `ConcurrentHashMap<Class, ...>`?
- Considered `ReadWriteLock` (or `StampedLock`) instead of a simple lock?
- `StampedLock` is used instead of `ReadWriteLock` when reentrancy is not needed?
- Considered `LongAdder` instead of an `AtomicLong` for a "hot field"?
- Considered queues from JCTools instead of the standard concurrent queues?
- Considered Caffeine cache instead of other caching libraries?
- Can apply speculation (optimistic concurrency) technique?
- Considered `ForkJoinPool` instead of `newFixedThreadPool(N)`?
Lazy initialization and double-checked locking
- Lazy initialization of a field should be thread-safe?
- Considered double-checked locking for a lazy initialization to improve performance?
- Double-checked locking follows the SafeLocalDCL pattern?
- Considered eager initialization instead of a lazy initialization to simplify code?
- Can do lazy initialization with a benign race and without locking to improve performance?
- Holder class idiom is used for lazy static fields rather than double-checked locking?
Non-blocking and partially blocking code
- Non-blocking code has enough comments to make line-by-line checking as easy as possible?
- Can use immutable POJO + compare-and-swap operations to simplify non-blocking code?
- Boundaries of non-blocking or benignly racy code are identified with WARNING comments?
- Busy waiting (spin loop), all calls to `Thread.yield()` and `Thread.onSpinWait()` are justified?
Threads and Executors
- Thread is named?
- Can use `ExecutorService` instead of creating a new `Thread` each time some method is called?
- ExecutorServices are not created within short-lived objects (but rather reused)?
- No network I/O in a CachedThreadPool?
- No blocking (incl. I/O) operations in a `ForkJoinPool` or in a parallel Stream pipeline?
- Can execute non-blocking computation in `FJP.commonPool()` instead of a custom thread pool?
- `ExecutorService` is shut down explicitly?
- Callback is attached to a `CompletableFuture` (`SettableFuture`) in non-async mode only if either:
  - the callback is lightweight and non-blocking; or
  - the future is completed and the callback is attached from the same thread pool?
- Adding a callback to a `CompletableFuture` (`SettableFuture`) in non-async mode is justified?
- Actions are delayed via a `ScheduledExecutorService` rather than `Thread.sleep()`?
- Checked the result of `awaitTermination()`?
- `ExecutorService` is not assigned into a variable of `Executor` type?
- `ScheduledExecutorService` is not assigned into a variable of `ExecutorService` type?
Parallel Streams
- Parallel Stream computation takes more than 100us in total?
- Comment before a parallel Streams pipeline explains how it takes more than 100us in total?
Futures
- Non-blocking computation needs to be decorated as a `Future`?
- Method returning a `Future` doesn't block?
- In a method returning a `Future`, considered wrapping an "expected" exception as a failed `Future`?
Thread interruption and `Future` cancellation
- Interruption status is restored before wrapping `InterruptedException` with another exception?
- `InterruptedException` is swallowed only in the following kinds of methods:
  - `Runnable.run()`, `Callable.call()`, or methods to be passed to executors as lambda tasks; or
  - Methods with "try" or "best effort" semantics?
- `InterruptedException` swallowing is documented for a method?
- Can use Guava's `Uninterruptibles` to avoid `InterruptedException` swallowing?
- `Future` is canceled upon catching an `InterruptedException` or a `TimeoutException` on `get()`?
Time
- `nanoTime()` values are compared in an overflow-aware manner?
- `currentTimeMillis()` is not used to measure time intervals and timeouts?
- Units for a time variable are identified in the variable's name or via `TimeUnit`?
- Negative timeouts and delays are treated as zeros?
- Tasks connected to system time or UTC time are not scheduled using `ScheduledThreadPoolExecutor`?
- Human and external interactions on consumer devices are not scheduled using `ScheduledThreadPoolExecutor`?
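The first item above (overflow-aware comparison of `nanoTime()` values) can be illustrated with a minimal sketch; the deadline-checking helper below is hypothetical, not part of any library:

```java
public class NanoTimeDemo {
    /** Overflow-safe: compares the *difference* to zero instead of comparing raw values. */
    static boolean deadlinePassed(long deadlineNanos, long nowNanos) {
        return nowNanos - deadlineNanos >= 0; // correct even if nanoTime() wraps around
    }

    /** Broken variant: direct comparison fails when nanoTime() overflows Long.MAX_VALUE. */
    static boolean deadlinePassedBroken(long deadlineNanos, long nowNanos) {
        return nowNanos >= deadlineNanos;
    }

    public static void main(String[] args) {
        long deadline = Long.MAX_VALUE - 10; // deadline just before the overflow point
        long now = deadline + 20;            // wraps into negative territory
        System.out.println(deadlinePassed(deadline, now));       // true
        System.out.println(deadlinePassedBroken(deadline, now)); // false: overflow bug
    }
}
```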
ThreadLocal
- `ThreadLocal` can be `static final`?
- Can redesign a subsystem to avoid usage of `ThreadLocal` (esp. a non-static one)?
- `ThreadLocal` is not used just to avoid a moderate amount of allocation?
- Considered replacing a non-static `ThreadLocal` with an instance-confined `Map<Thread, ...>`?
Thread safety of Cleaners and native code
- `close()` is concurrently idempotent in a class with a `Cleaner` or `finalize()`?
- Method accessing native state calls `reachabilityFence()` in a class with a `Cleaner` or `finalize()`?
- `Cleaner` or `finalize()` is used for real cleanup, not mere reporting?
- Considered making a class with native state thread-safe?
# Dn.1. If the patch introduces a new subsystem (class, method) with concurrent code, is the necessity for concurrency or thread safety rationalized in the patch description? Is there a discussion of alternative design approaches that could simplify the concurrency model of the code (see the next item)?
A way to nudge thinking about concurrency design is demanding the usage of concurrency tools and language constructs to be justified in comments.
See also an item about unneeded thread-safety of classes and methods.
# Dn.2. Is it possible to apply one or several design patterns (some of them are listed below) to significantly simplify the concurrency model of the code, while not considerably compromising other quality aspects, such as overall simplicity, efficiency, testability, extensibility, etc.?
Immutability/Snapshotting. When some state should be updated, a new immutable object (or a snapshot within a mutable object) is created, published and used, while some concurrent threads may still use older copies or snapshots. See [EJ Item 17], [JCIP 3.4], RC.5 and NB.2, `CopyOnWriteArrayList`, `CopyOnWriteArraySet`, persistent data structures.
Divide and conquer. Work is split into several parts that are processed independently, each part in a single thread. Then the results of processing are combined. Parallel Streams or `ForkJoinPool` (see TE.4 and TE.5) can be used to apply this pattern.
Producer-consumer. Pieces of work are transmitted between worker threads via queues. See [JCIP 5.3], Dl.1, CSP, SEDA.
Instance confinement. Objects of some root type encapsulate some complex hierarchical child state. Root objects are solitarily responsible for the safety of accesses and modifications to the child state from multiple threads. In other words, composed objects are synchronized rather than synchronized objects are composed. See [JCIP 4.2, 10.1.3, 10.1.4]. RC.3, RC.4, and RC.5 describe race conditions that could happen to instance-confined state. TL.4 touches on instance confinement of thread-local state.
Thread/Task/Serial thread confinement. Some state is made local to a thread using top-down pass-through parameters or `ThreadLocal`. See [JCIP 3.3] and the checklist section about ThreadLocals. Task confinement is a variation of the idea of thread confinement that is used in conjunction with the divide-and-conquer pattern. It usually comes in the form of lambda-captured "context" parameters or fields in the per-thread task objects. Serial thread confinement is an extension of the idea of thread confinement for the producer-consumer pattern, see [JCIP 5.3.2].
Active object. An object manages its own `ExecutorService` or `Thread` to do the work. See the article on Wikipedia and TE.6.
# Dc.1. For every class, method, and field that has signs of being thread-safe, such as the `synchronized` keyword, `volatile` modifiers on fields, use of any classes from `java.util.concurrent.*`, or third-party concurrency primitives, or concurrent collections: do their Javadoc comments include

- The justification for thread safety: is it explained why a particular class, method or field has to be thread-safe?
- Concurrent access documentation: is it enumerated from what methods and in contexts of what threads (executors, thread pools) each specific method of a thread-safe class is called?

Wherever some logic is parallelized or the execution is delegated to another thread, are there comments explaining why it's worse or inappropriate to execute the logic sequentially or in the same thread? See PS.1 regarding this.

See also NB.3 and NB.4 regarding justification of non-blocking code, racy code, and busy waiting.

If the usage of concurrency tools is not justified, there is a possibility of code ending up with unnecessary thread-safety, redundant atomics, redundant `volatile` modifiers, or unneeded Futures.
# Dc.2. If the patch introduces a new subsystem that uses threads or thread pools, are there high-level descriptions of the threading model, the concurrent control flow (or the data flow) of the subsystem somewhere, e. g. in the Javadoc comment for the package in `package-info.java` or for the main class of the subsystem? Are these descriptions kept up-to-date when new threads or thread pools are added or some old ones deleted from the system?

Description of the threading model includes the enumeration of threads and thread pools created and managed in the subsystem, and external pools used in the subsystem (such as `ForkJoinPool.commonPool()`), their sizes and other important characteristics such as thread priorities, and the lifecycle of the managed threads and thread pools.
A high-level description of concurrent control flow should be an overview and tie together concurrent control flow documentation for individual classes, see the previous item. If the producer-consumer pattern is used, the concurrent control flow is trivial and the data flow should be documented instead.
Describing threading models and control/data flow greatly improves the maintainability of the system, because in the absence of descriptions or diagrams developers spend a lot of time and effort to create and refresh these models in their minds. Putting the models down also helps to discover bottlenecks and the ways to simplify the design (see Dn.2).
# Dc.3. For classes and methods that are parts of the public API or the extensions API of the project: is it specified in their Javadoc comments whether they are (or, in case of interfaces and abstract classes designed for subclassing in extensions, should be implemented as) immutable, thread-safe, or not thread-safe? For classes and methods that are (or should be implemented as) thread-safe, is it documented precisely with what other methods (or themselves) they may be called concurrently from multiple threads? See also [EJ Item 82], [JCIP 4.5], CON52-J.

If the `@com.google.errorprone.annotations.Immutable` annotation is used to mark immutable classes, the Error Prone static analysis tool is capable of detecting when a class is not actually immutable (see a relevant bug pattern).
# Dc.4. For subsystems, classes, methods, and fields that use some concurrency design patterns, either high-level, such as those mentioned in Dn.2, or low-level, such as double-checked locking or spin looping: are the used concurrency patterns pronounced in the design or implementation comments for the respective subsystems, classes, methods, and fields? This helps readers to make sense out of the code quicker.

Pronouncing the used patterns in comments may be replaced with more succinct documentation annotations, such as `@Immutable` (Dc.3), `@GuardedBy` (Dc.7), `@LazyInit` (LI.5), or annotations that you define yourself for specific patterns which appear many times in your project.
# Dc.5. Are `ConcurrentHashMap` and `ConcurrentSkipListMap` objects stored in fields and variables of `ConcurrentHashMap` or `ConcurrentSkipListMap` or `ConcurrentMap` type, but not just `Map`?

This is important, because in code like the following:
    ConcurrentMap<String, Entity> entities = getEntities();
    if (!entities.containsKey(key)) {
        entities.put(key, entity);
    } else {
        ...
    }
It should be pretty obvious that there might be a race condition because an entity may be put into the map by a concurrent thread between the calls to `containsKey()` and `put()` (see RC.1 about this type of race conditions). While if the type of the entities variable was just `Map<String, Entity>` it would be less obvious and readers might think this is only slightly suboptimal code and pass by.
It's possible to turn this advice into an inspection in IntelliJ IDEA.
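One possible fix for such racy check-then-put code (sketched here; the `Entity` class and the field name are illustrative) is to collapse the check and the update into a single atomic call, while keeping the variable typed as `ConcurrentHashMap` so the concurrency implications stay visible:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutIfAbsentDemo {
    static final class Entity {
        final String name;
        Entity(String name) { this.name = name; }
    }

    // Declared as ConcurrentHashMap, not Map, so readers see the concurrent semantics.
    static final ConcurrentHashMap<String, Entity> entities = new ConcurrentHashMap<>();

    /** Atomic check-and-put: no window for another thread between "contains" and "put". */
    static Entity register(String key) {
        return entities.computeIfAbsent(key, Entity::new);
    }

    public static void main(String[] args) {
        Entity a = register("a");
        Entity again = register("a");
        System.out.println(a == again); // true: only one Entity created per key
    }
}
```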
# Dc.6. An extension of the previous item: are ConcurrentHashMaps on which `compute()`, `computeIfAbsent()`, `computeIfPresent()`, or `merge()` methods are called stored in fields and variables of `ConcurrentHashMap` type rather than `ConcurrentMap`? This is because `ConcurrentHashMap` (unlike the generic `ConcurrentMap` interface) guarantees that the lambdas passed into `compute()`-like methods are performed atomically per key, and the thread safety of the class may depend on that guarantee.

This advice may seem to be overly pedantic, but if used in conjunction with a static analysis rule that prohibits calling `compute()`-like methods on `ConcurrentMap`-typed objects that are not ConcurrentHashMaps (it's possible to create such an inspection in IntelliJ IDEA too) it could prevent some bugs: e. g. calling `compute()` on a `ConcurrentSkipListMap` might be a race condition and it's easy to overlook that for somebody who is used to relying on the strong semantics of `compute()` in `ConcurrentHashMap`.
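A small sketch of relying on that per-key atomicity (the word-count example is illustrative); note the field is typed as `ConcurrentHashMap`, per this item, because the class depends on `merge()` being atomic:

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

public class ComputeDemo {
    // ConcurrentHashMap type (not ConcurrentMap): the code relies on atomic merge() per key.
    static final ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

    static void count(String word) {
        // Atomic read-modify-write for the key; safe to call from many threads.
        counts.merge(word, 1, Integer::sum);
    }

    public static void main(String[] args) {
        List.of("a", "b", "a").parallelStream().forEach(ComputeDemo::count);
        System.out.println(counts.get("a")); // 2
    }
}
```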
# Dc.7. Is the `@GuardedBy` annotation used? If accesses to some fields should be protected by some lock, are those fields annotated with `@GuardedBy`? Are private methods that are called from within critical sections in other methods annotated with `@GuardedBy`? If the project doesn't depend on any library containing this annotation (it's provided by `jcip-annotations`, `error_prone_annotations`, `jsr305`, and other libraries) and for some reason it's undesirable to add such a dependency, it should be mentioned in Javadoc comments for the respective fields and methods that accesses and calls to them should be protected by some specified locks.

See [JCIP 2.4] for more information about `@GuardedBy`.

Usage of `@GuardedBy` is especially beneficial in conjunction with the Error Prone tool which is able to statically check for unguarded accesses to fields and methods with @GuardedBy annotations. There is also an inspection "Unguarded field access" in IntelliJ IDEA with the same effect.
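A minimal sketch of the annotation's usage; to stay dependency-free the example declares a local `@GuardedBy` annotation, whereas a real project would import it from `jcip-annotations` or `error_prone_annotations`:

```java
public class GuardedByDemo {
    // Stand-in for the @GuardedBy provided by jcip-annotations / error_prone_annotations.
    @interface GuardedBy { String value(); }

    private final Object lock = new Object();

    @GuardedBy("lock")
    private long total;

    public void add(long amount) {
        synchronized (lock) {
            total += amount;
        }
    }

    // A private method called only from within critical sections is annotated too.
    @GuardedBy("lock")
    private long unsafeRead() {
        return total;
    }

    public long get() {
        synchronized (lock) {
            return unsafeRead();
        }
    }

    public static void main(String[] args) {
        GuardedByDemo d = new GuardedByDemo();
        d.add(5);
        System.out.println(d.get()); // 5
    }
}
```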
# Dc.8. If in a thread-safe class some fields are accessed both from within critical sections and outside of critical sections, is it explained in comments why this is safe? For example, unprotected read-only access to a reference to an immutable object might be benignly racy (see RC.5). Answering this question also helps to prevent problems described in IS.2, IS.3, and RC.2.

Instead of writing a comment explaining that access to a lazily initialized field outside of a critical section is safe, the field could just be annotated with `@LazyInit` from `error_prone_annotations` (but make sure to read the Javadoc for this annotation and to check that the field conforms to the description; LI.3 and LI.5 mention potential pitfalls).

Apart from the explanations why the partially blocking or racy code is safe, there should also be comments justifying such error-prone code and warning the developers that the code should be modified and reviewed with double attention: see NB.3.

There is an inspection "Field accessed in both synchronized and unsynchronized contexts" in IntelliJ IDEA which helps to find classes with unbalanced synchronization.
# Dc.9. Regarding every field with a `volatile` modifier: does it really need to be `volatile`? Does the Javadoc comment for the field explain why the semantics of `volatile` field reads and writes (as defined in the Java Memory Model) are required for the field?

Similarly to what is noted in the previous item, justification for a lazily initialized field to be `volatile` could be omitted if the lazy initialization pattern itself is identified, according to Dc.4. When `volatile` on a field is needed to ensure safe publication of objects written into it (see [JCIP 3.5] or here) or linearizability of values observed by reader threads, then just mentioning "safe publication" or "linearizable reads" in the Javadoc comment for the field is sufficient; it's not needed to elaborate the semantics of `volatile` which ensure the safe publication or the linearizability.

By extension, this item also applies when an `AtomicReference` (or a primitive atomic) is used instead of raw `volatile`, along with the consideration about unnecessary atomics which might also be relevant in this case.
# Dc.10. Is it explained in the Javadoc comment for each mutable field in a thread-safe class that is neither `volatile` nor annotated with `@GuardedBy`, why that is safe? Perhaps, the field is only accessed and mutated from a single method or a set of methods that are specified to be called only from a single thread sequentially (described as per Dc.1). This recommendation also applies to `final` fields that store objects of non-thread-safe classes when those objects could be mutated from some methods of the enclosing thread-safe class. See IS.2, IS.3, RC.2, RC.3, and RC.4 about what could go wrong with such code.
# IS.1. Can non-private static methods be called concurrently from multiple threads? If there is a non-private static field with mutable state, such as a collection, is it an instance of a thread-safe class or synchronized using some `Collections.synchronizedXxx()` method?

Note that calls to `DateFormat.parse()` and `format()` must be synchronized because they mutate the object: see IS.5.
# IS.2. There is no situation where some thread awaits in a loop until a non-volatile field has a certain value, expecting it to be updated from another thread? The field should at least be `volatile` to ensure eventual visibility of concurrent updates. See [JCIP 3.1, 3.1.4], and VNA00-J for more details and examples.

Even if the respective field is `volatile`, busy waiting for a condition in a loop can be abused easily and therefore should be justified in a comment: see NB.4.

Dc.10 also demands adding explanatory comments to mutable fields which are neither `volatile` nor annotated with `@GuardedBy`, which should inevitably lead to the discovery of the visibility issue.
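A minimal sketch of the fix: the flag awaited in a loop is declared `volatile`, so the waiting thread is guaranteed to eventually observe the writer's update (the loop is still busy waiting and should itself be justified, per NB.4):

```java
public class VolatileFlagDemo {
    // volatile guarantees the write below becomes visible to the waiting thread.
    private static volatile boolean done = false;

    /** Returns true if the waiter observed the flag and terminated. */
    static boolean demo() {
        Thread waiter = new Thread(() -> {
            while (!done) {          // without volatile this loop might spin forever
                Thread.onSpinWait(); // busy wait, acceptable only for this tiny demo
            }
        });
        waiter.start();
        done = true;                 // visible to the waiter thanks to volatile
        try {
            waiter.join(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !waiter.isAlive();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```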
# IS.3. Read accesses to non-volatile primitive fields which can be updated concurrently are protected with a lock as well as the writes? The minimum reason for this is that reads of `long` and `double` fields are non-atomic (see JLS 17.7, [JCIP 3.1.2], VNA05-J). But even with other types of fields, unbalanced synchronization creates possibilities for downstream bugs related to visibility (see the previous item) and the lack of the expected happens-before relationships.

As well as with the previous item, accurate documentation of benign races (Dc.8 and Dc.10) should reliably expose the cases when unbalanced synchronization is problematic.

See also RC.2 regarding unbalanced synchronization of read accesses to mutable objects, such as collections.

There is a relevant inspection "Field accessed in both synchronized and unsynchronized contexts" in IntelliJ IDEA.
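A sketch of balanced synchronization for a `long` field: both the write and the read hold the same lock, so reads can't observe a torn 64-bit value, and a happens-before edge is established between writers and readers:

```java
public class BalancedSyncDemo {
    private long counter; // neither volatile nor atomic: guarded by "this"

    public synchronized void increment() {
        counter++;
    }

    // The read is synchronized too: no torn reads of the 64-bit value,
    // and the reader sees all writes made before the lock was released.
    public synchronized long get() {
        return counter;
    }

    public static void main(String[] args) {
        BalancedSyncDemo d = new BalancedSyncDemo();
        d.increment();
        d.increment();
        System.out.println(d.get()); // 2
    }
}
```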
# IS.4. Is the business logic written for server frameworks thread-safe? This includes:

- `Servlet` implementations
- `@(Rest)Controller`-annotated classes, `@Get/PostMapping`-annotated methods in Spring
- `@SessionScoped` and `@ApplicationScoped` managed beans in JSF
- `Filter` and `Handler` implementations in various synchronous and asynchronous frameworks (including Jetty, Netty, Undertow)
- `@GET`- and `@POST`-annotated methods (resources) in JAX-RS (RESTful APIs)

It's easy to forget that if such code mutates some state (e. g. fields in the class), it must be properly synchronized or access only concurrent collections and classes.
# IS.5. Calls to `parse()` and `format()` on a shared instance of `DateFormat` are synchronized, e. g. if a `DateFormat` is stored in a static field? Although `parse()` and `format()` may look "read-only", they actually mutate the receiving `DateFormat` object.

An inspection "Non thread-safe static field access" in IntelliJ IDEA helps to catch such concurrency bugs.
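Besides synchronizing, a common alternative is to avoid the shared mutable `DateFormat` entirely: `java.time.format.DateTimeFormatter` is immutable and thread-safe, so a single static instance can be shared freely. A minimal sketch:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class FormatterDemo {
    // Immutable and thread-safe: safe to share in a static field without
    // synchronization, unlike DateFormat/SimpleDateFormat.
    static final DateTimeFormatter ISO = DateTimeFormatter.ofPattern("yyyy-MM-dd");

    static String format(LocalDate date) {
        return ISO.format(date);
    }

    static LocalDate parse(String text) {
        return LocalDate.parse(text, ISO);
    }

    public static void main(String[] args) {
        System.out.println(format(parse("2020-01-31"))); // 2020-01-31
    }
}
```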
# ETS.1. An example of excessive thread safety is a class where every modifiable field is `volatile` or an `AtomicReference` or other atomic, and every collection field stores a concurrent collection (e. g. `ConcurrentHashMap`), although all accesses to those fields are `synchronized`.

There shouldn't be any "extra" thread safety in code, there should be just enough of it. Duplication of thread safety confuses readers because they might think the extra thread safety precautions are (or used to be) needed for something but will fail to find the purpose.

The exception from this principle is the `volatile` modifier on the lazily initialized field in the safe local double-checked locking pattern, which is the recommended way to implement double-checked locking, despite that `volatile` is excessive for correctness when the lazily initialized object has all `final` fields. Without that `volatile` modifier the thread safety of the double-checked locking could easily be broken by a change (addition of a non-final field) in the class of lazily initialized objects, though that class should not be aware of subtle concurrency implications. If the class of lazily initialized objects is specified to be immutable (see Dc.3) the `volatile` is still unnecessary and the UnsafeLocalDCL pattern could be used safely, but the fact that some class has all `final` fields doesn't necessarily mean that it's immutable.

See also the section about double-checked locking.
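The safe local double-checked locking pattern mentioned above, sketched minimally (the `ExpensiveObject` class is illustrative): the field is `volatile` and is read into a local variable once, so the fast path performs a single volatile read:

```java
public class SafeLocalDclDemo {
    static final class ExpensiveObject {
        final int value = 42;
    }

    private volatile ExpensiveObject instance; // volatile is part of the pattern

    ExpensiveObject get() {
        ExpensiveObject local = instance;      // single volatile read on the fast path
        if (local == null) {
            synchronized (this) {
                local = instance;              // re-check under the lock
                if (local == null) {
                    local = new ExpensiveObject();
                    instance = local;          // safe publication via the volatile write
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        SafeLocalDclDemo d = new SafeLocalDclDemo();
        System.out.println(d.get() == d.get()); // true: initialized exactly once
    }
}
```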
# ETS.2. Aren't there `AtomicReference`, `AtomicBoolean`, `AtomicInteger` or `AtomicLong` fields on which only `get()` and `set()` methods are called? Simple fields with `volatile` modifiers can be used instead, but `volatile` might not be needed too; see Dc.9.
# ETS.3. Does a class (method) need to be thread-safe? May a class be accessed (method called) concurrently from multiple threads (without happens-before relationships between the accesses or calls)? Can a class (method) be simplified by making it non-thread-safe?

See also Ft.1 about unneeded wrapping of a computation into a `Future` and Dc.9 about potentially unneeded `volatile` modifiers.

This item is a close relative of Dn.1 (about rationalizing concurrency and thread safety in the patch description) and Dc.1 (about justifying concurrency in Javadocs for classes and methods, and documenting concurrent access). If these actions are done, it should be self-evident whether the class (method) needs to be thread-safe or not. There may be cases, however, when it might be desirable to make the class (method) thread-safe although it's not supposed to be accessed or called concurrently as of the moment of the patch. For example, thread safety may be needed to ensure memory safety (see CN.4 about this). Anticipating some changes to the codebase that make the class (method) accessed from multiple threads may be another reason to make the class (method) thread-safe up front.
# ETS.4. Does a `ReentrantLock` (or `ReentrantReadWriteLock`, `Semaphore`) need to be fair? To justify the throughput penalty of making a lock fair, it should be demonstrated that a lack of fairness leads to unacceptably long starvation periods in some threads trying to acquire the lock or pass the semaphore. This should be documented in the Javadoc comment for the field holding the lock or the semaphore. See [JCIP 13.3] for more details.
# RC.1. Aren't `ConcurrentMap` (or Cache) objects updated with separate `containsKey()`, `get()`, `put()` and `remove()` calls instead of a single call to `compute()`/`computeIfAbsent()`/`computeIfPresent()`/`replace()`?
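A sketch of the difference (the hit-counter example is illustrative): the separate `get()`+`put()` pair can lose updates under contention, while a single `merge()` call is atomic per key:

```java
import java.util.concurrent.ConcurrentHashMap;

public class AtomicUpdateDemo {
    static final ConcurrentHashMap<String, Integer> hits = new ConcurrentHashMap<>();

    // Racy: another thread may update the key between get() and put(),
    // silently losing an increment.
    static void recordRacy(String key) {
        Integer old = hits.get(key);
        hits.put(key, old == null ? 1 : old + 1);
    }

    // Correct: merge() performs the read-modify-write atomically for the key.
    static void record(String key) {
        hits.merge(key, 1, Integer::sum);
    }

    public static void main(String[] args) {
        record("page");
        record("page");
        System.out.println(hits.get("page")); // 2
    }
}
```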
# RC.2. Aren't there point read accesses such as `Map.get()`, `containsKey()` or `List.get()` outside of critical sections to a non-thread-safe collection such as `HashMap` or `ArrayList`, while new entries can be added to the collection concurrently, even though there is a happens-before edge between the moment when some entry is put into the collection and the moment when the same entry is point-queried outside of a critical section?

The problem is that when new entries can be added to a collection, it grows and changes its internal structure from time to time (HashMap rehashes the hash table, `ArrayList` reallocates the internal array). At such moments races might happen and unprotected point read accesses might fail with `NullPointerException`, `ArrayIndexOutOfBoundsException`, or return `null` or some random entry.

Note that this concern applies to `ArrayList` even when elements are only added to the end of the list. However, a small change in `ArrayList`'s implementation in OpenJDK could have disallowed data races in such cases at very little cost. If you are subscribed to the concurrency-interest mailing list, you could help to bring attention to this problem by reviving this thread.

See also IS.3 regarding unbalanced synchronization of accesses to primitive fields.
# RC.3. A variation of the previous item: isn't a non-thread-safe collection such as `HashMap` or `ArrayList` iterated outside of a critical section, while it may be modified concurrently? This could happen by accident when an `Iterable`, `Iterator` or `Stream` over a collection is returned from a method of a thread-safe class, even though the iterator or stream is created within a critical section. Note that returning unmodifiable collection views like `Collections.unmodifiableList()` from getters wrapping collection fields that may be modified concurrently doesn't solve this problem. If the collection is relatively small, it should be copied entirely, or a copy-on-write collection (see Sc.3) should be used instead of a non-thread-safe collection.

Note that calling `toString()` on a collection (e. g. in a logging statement) implicitly iterates over it.

Like the previous item, this one applies to growing ArrayLists too.

This item applies even to synchronized collections: see RC.10 for details.
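A sketch of the "copy entirely" remedy: the getter returns an independent immutable copy made inside the critical section, so iteration by the caller can't race with concurrent modification (class and field names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class DefensiveCopyDemo {
    private final List<String> names = new ArrayList<>(); // guarded by "this"

    public synchronized void add(String name) {
        names.add(name);
    }

    // Returns an immutable snapshot copied under the lock; callers may iterate freely.
    // (Collections.unmodifiableList(names) would NOT be safe: it's just a live view.)
    public synchronized List<String> getNames() {
        return List.copyOf(names);
    }

    public static void main(String[] args) {
        DefensiveCopyDemo d = new DefensiveCopyDemo();
        d.add("a");
        List<String> snapshot = d.getNames();
        d.add("b");                          // does not affect the snapshot
        System.out.println(snapshot.size()); // 1
    }
}
```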
# RC.4. Generalization of the previous item: aren't non-trivial objects that can be mutated concurrently returned from getters in a thread-safe class (and thus inevitably leaking outside of critical sections)?
# RC.5. If there are multiple variables in a thread-safe class that are updated at once but have individual getters, isn't there a race condition in the code that calls those getters? If there is, the variables should be made `final` fields in a dedicated POJO that serves as a snapshot of the updated state. The POJO is stored in a field of the thread-safe class, directly or as an `AtomicReference`. Multiple getters to individual fields should be replaced with a single getter that returns the POJO. This allows avoiding a race condition in the client code by reading a consistent snapshot of the state at once.

This pattern is also very useful for creating safe and reasonably simple non-blocking code: see NB.2 and [JCIP 15.3.1].
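A sketch of the recommended shape (names are illustrative): the two values that are updated together live in one immutable POJO behind a single getter, so callers always see a mutually consistent pair:

```java
import java.util.concurrent.atomic.AtomicReference;

public class StateSnapshotDemo {
    /** Immutable snapshot of state that must be read consistently. */
    static final class Bounds {
        final int min, max;
        Bounds(int min, int max) { this.min = min; this.max = max; }
    }

    private final AtomicReference<Bounds> bounds =
            new AtomicReference<>(new Bounds(0, 10));

    // One getter returning the whole POJO instead of separate getMin()/getMax(),
    // which could observe min from one update and max from another.
    Bounds getBounds() { return bounds.get(); }

    // Writers replace the whole snapshot atomically.
    void setBounds(int min, int max) { bounds.set(new Bounds(min, max)); }

    public static void main(String[] args) {
        StateSnapshotDemo d = new StateSnapshotDemo();
        d.setBounds(5, 15);
        Bounds b = d.getBounds();
        System.out.println(b.min + ".." + b.max); // 5..15
    }
}
```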
# RC.6. If some logic within some critical section depends on some data that principally is part of the internal mutable state of the class, but was read outside of the critical section or in a different critical section, isn't there a race condition because the local copy of the data may become out of sync with the internal state by the time when the critical section is entered? This is a typical variant of check-then-act race condition, see [JCIP 2.2.1].

An example of this race condition is calling `toArray()` on synchronized collections with a sized array:
    list = Collections.synchronizedList(new ArrayList<>());
    ...
    elements = list.toArray(new Element[list.size()]);
This might unexpectedly leave the `elements` array with some nulls in the end if there are some concurrent removals from the list. Therefore, `toArray()` on a synchronized collection should be called with a zero-length array: `toArray(new Element[0])`, which is also not worse from the performance perspective: see "Arrays of Wisdom of the Ancients".

See also RC.9 about cache invalidation races which are similar to check-then-act races.
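The zero-length-array fix in runnable form: `toArray()` sizes the result itself inside the collection's own synchronization, so there is no stale `size()` to race on:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ToArrayDemo {
    // Preferred form: the zero-length array lets toArray() size the result
    // inside the synchronized collection's own lock, so no trailing nulls.
    static String[] snapshot(List<String> list) {
        return list.toArray(new String[0]);
        // NOT: list.toArray(new String[list.size()]) — size() may be stale.
    }

    public static void main(String[] args) {
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("a");
        list.add("b");
        System.out.println(snapshot(list).length); // 2
    }
}
```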
# RC.7. Aren't thererace conditions between the code (i. e. programruntime actions) and some actions in the outside world or actions performed by some other programsrunning on the machine? For example, if some configurations or credentials are hot reloaded fromsome file or external registry, reading separate configuration parameters or separate credentials(such as username and password) in separate transactions with the file or the registry may be racingwith a system operator updating those configurations or credentials.
Another example is checking that a file exists (or not exists) and then reading, deleting, or creating it, respectively, while another program or a user may delete or create the file between the check and the act. It's not always possible to cope with such race conditions, but it's useful to keep such possibilities in mind. Prefer static methods from the `java.nio.file.Files` class and the NIO file reading/writing API to methods from the old `java.io.File` for file system operations. Methods from `Files` are more sensitive to file system race conditions and tend to throw exceptions in adverse cases, while methods on `File` swallow errors and make it hard even to detect race conditions. Static methods from `Files` also support `StandardOpenOption.CREATE` and `CREATE_NEW`, which may help to ensure some extra atomicity.
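A hedged sketch of the `CREATE_NEW` point (method names are illustrative): the check-then-create sequence is racy, while `CREATE_NEW` folds the existence check and the creation into one atomic file-system operation that fails loudly if another process wins the race.

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AtomicCreate {
    // Race-prone check-then-act: another process may create the file
    // between Files.exists() and Files.write().
    static boolean createIfAbsentRacy(Path p, byte[] data) throws IOException {
        if (Files.exists(p)) {
            return false;
        }
        Files.write(p, data); // may silently overwrite a concurrently created file
        return true;
    }

    // Atomic: CREATE_NEW throws FileAlreadyExistsException if the file
    // appeared concurrently, so the race is detected instead of hidden.
    static boolean createIfAbsentAtomic(Path p, byte[] data) throws IOException {
        try {
            Files.write(p, data, StandardOpenOption.CREATE_NEW);
            return true;
        } catch (FileAlreadyExistsException e) {
            return false;
        }
    }
}
```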
# RC.8. If you are using Guava Cache and `invalidate(key)`, are you not affected by the race condition which can leave a `Cache` with an invalid (stale) value mapped for a key? Consider using Caffeine cache which doesn't have this problem. Caffeine is also faster and more scalable than Guava Cache: see Sc.9.
# RC.9. Generalization of the previous item: isn't there a potential cache invalidation race in the code? There are several ways to get into this problem:

- Using the `Cache.put()` method concurrently with `invalidate()`. Unlike RC.8, this is a race regardless of what caching library is used, not necessarily Guava. This is also similar to RC.1.
- Having `put()` and `invalidate()` methods exposed in the own Cache interface. This places the burden of synchronizing `put()` (together with the preceding "checking" code, such as `get()`) and `invalidate()` calls on the users of the API, which really should be the job of the Cache implementation.
- There is some lazily initialized state in a mutable object which can be invalidated upon mutation of the object, and can also be accessed concurrently with the mutation. This means the class is in the category of non-blocking concurrency: see the corresponding checklist items. A way to avoid a cache invalidation race in this case is to wrap the primary state and the cached state into a POJO and replace it atomically, as described in NB.2.
# RC.10. The whole iteration loop over a synchronized collection (i. e. obtained from one of the `Collections.synchronizedXxx()` static factory methods), or a Stream pipeline using a synchronized collection as a source, is protected by `synchronized (coll)`? See the Javadoc for examples and details.

This also applies to passing synchronized collections into:

- Copy constructors of other collections, e. g. `new ArrayList<>(synchronizedColl)`
- Static factory methods of other collections, e. g. `List.copyOf()`, `Set.copyOf()`, `ImmutableMap.copyOf()`
- Bulk methods on other collections:
  - `otherColl.containsAll(synchronizedColl)`
  - `otherColl.addAll(synchronizedColl)`
  - `otherColl.removeAll(synchronizedColl)`
  - `otherMap.putAll(synchronizedMap)`
  - `otherColl.containsAll(synchronizedMap.keySet())`
  - Etc.

Because in all these cases there is an implicit iteration over the source collection.
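A short sketch of the guarded-iteration point: a copy constructor iterates over its source implicitly, so it must run under the synchronized collection's own lock, exactly like an explicit loop.

```java
import java.util.ArrayList;
import java.util.List;

public class SafeCopy {
    // The ArrayList copy constructor iterates over the source collection, so
    // the whole call must be guarded by the synchronized collection itself
    // (the mutex is the wrapper object, per the Collections.synchronizedList
    // Javadoc).
    static <T> List<T> copyOf(List<T> synchronizedList) {
        synchronized (synchronizedList) {
            return new ArrayList<>(synchronizedList);
        }
    }
}
```

Without the `synchronized` block, a concurrent modification during the implicit iteration may throw `ConcurrentModificationException` or produce a corrupted copy.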
See also RC.3 about unprotected iteration over non-thread-safe collections.
# T.1. Was it considered to add multi-threaded unit tests for a thread-safe class or method? Single-threaded tests don't really test the thread safety and concurrency. Note that this question doesn't mean to indicate that there must be concurrent unit tests for every piece of concurrent code in the project, because correct concurrent tests take a lot of effort to write and therefore they might often have low ROI.

What is the worst thing that might happen if this code has a concurrency bug? This is a useful question to inform the decision about writing concurrent tests. The consequences may range from a tiny, entirely undetectable memory leak, to storing corrupted data in a durable database or a security breach.
# T.2. Isn't a shared `java.util.Random` object used for data generation in a concurrency test? `java.util.Random` is synchronized internally, so if multiple test threads (which are conceived to access the tested class concurrently) access the same `java.util.Random` object, then the test might degenerate to a mostly synchronous one and fail to exercise the concurrency properties of the tested class. See [JCIP 12.1.3]. `Math.random()` is subject to this problem too, because internally `Math.random()` uses a globally shared `java.util.Random` instance. Use `ThreadLocalRandom` instead.
# T.3. Do concurrent test workers coordinate their start using a latch such as `CountDownLatch`? If they don't, much or even all of the test work might be done by the first few started workers. See [JCIP 12.1.3] for more information.
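A sketch of the start-gate idiom from [JCIP 12.1.3] (the helper method name is illustrative): all workers block on one latch so they hit the tested code at the same moment, instead of the first-started threads finishing before the last ones even begin.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class StartGateTest {
    /** Runs {@code action} from {@code nThreads} threads started simultaneously. */
    static int runConcurrently(int nThreads, Runnable action)
            throws InterruptedException {
        CountDownLatch startGate = new CountDownLatch(1);
        CountDownLatch endGate = new CountDownLatch(nThreads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < nThreads; i++) {
            new Thread(() -> {
                try {
                    startGate.await(); // block until all workers are ready
                    action.run();
                    completed.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    endGate.countDown();
                }
            }).start();
        }
        startGate.countDown(); // release all workers at once
        endGate.await();
        return completed.get();
    }
}
```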
# T.4. Are there more test threads than there are available processors, if possible for the test? This will help to generate more thread scheduling interleavings and thus test the logic for the absence of race conditions more thoroughly. See [JCIP 12.1.6] for more information. The number of available processors on the machine can be obtained as `Runtime.getRuntime().availableProcessors()`.
# T.5. Are there no regular assertions in code that is executed not in the main thread running the unit test? Consider the following example:

```java
@Test
public void testServiceListener() {
    // Missed assertion -- Don't do this!
    service.addListener(
        event -> Assert.assertEquals(Event.Type.MESSAGE_RECEIVED, event.getType()));
    service.sendMessage("test");
}
```

Assuming the `service` executes the code of listeners asynchronously in some internally-managed or a shared thread pool, even if the assertion within the listener's lambda fails, JUnit and TestNG will think the test has passed.
The solution to this problem is either to pass the data (or thrown exceptions) from a concurrent thread back to the main test thread and verify it at the end of the test, or to use the ConcurrentUnit library which takes over the boilerplate associated with the first approach.
# T.6. Is the result of `CountDownLatch.await()` method calls checked? The most frequent form of this mistake is forgetting to wrap `CountDownLatch.await()` into `assertTrue()` in tests, which makes the test not actually verify that the production code works correctly. The absence of a check in production code might cause race conditions.
Apart from `CountDownLatch.await()`, the other similar methods whose result must be checked are:

- `Lock.tryLock()` and `tryAcquire()` methods on `Semaphore` and `RateLimiter` from Guava
- `Monitor.enter(...)` in Guava
- `Condition.await(...)`
- `awaitTermination()` and `awaitQuiescence()` methods. There is a separate item about them.
- `Process.waitFor(...)`

It's possible to find these problems using static analysis, e. g. by configuring the "Result of method call ignored" inspection in IntelliJ IDEA to recognize `Lock.tryLock()`, `CountDownLatch.await()` and other methods listed above. They are not in the default set of checked methods, so they should be added manually in the inspection configuration.
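A minimal sketch of the checked vs. unchecked forms (method names are illustrative; in a test, `assertTrue(latch.await(...))` achieves the same as the explicit check):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitChecks {
    // Wrong: a timeout goes completely unnoticed -- don't do this.
    static void awaitUnchecked(CountDownLatch latch) throws InterruptedException {
        latch.await(5, TimeUnit.SECONDS); // boolean result silently discarded
    }

    // Right: a false result (timeout) fails loudly instead of being swallowed.
    static void awaitChecked(CountDownLatch latch) throws InterruptedException {
        if (!latch.await(5, TimeUnit.SECONDS)) {
            throw new AssertionError("Latch was not counted down within the timeout");
        }
    }
}
```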
# Lk.1. Is it possible to use concurrent collections and/or utilities from `java.util.concurrent.*` and avoid using locks with `Object.wait()`/`notify()`/`notifyAll()`? Code redesigned around concurrent collections and utilities is often both clearer and less error-prone than code implementing the equivalent logic with intrinsic locks, `Object.wait()` and `notify()` (`Lock` objects with `await()` and `signal()` are no different in this regard). See [EJ Item 81] for more information.
# Lk.2. Is it possible to simplify code that uses intrinsic locks or `Lock` objects with conditional waits by using Guava's `Monitor` instead?
# Lk.3. Isn't `ReentrantLock` used when `synchronized` would suffice? `ReentrantLock` shouldn't be used in situations when none of its distinctive features (`tryLock()`, timed and interruptible locking methods, etc.) are used. Note that reentrancy is not such a feature: intrinsic Java locks support reentrancy too. The ability for a `ReentrantLock` to be fair is seldom such a feature: see ETS.4. See [JCIP 13.4] for more information about this.

This advice also applies when a class uses a private lock object (instead of `synchronized (this)` or synchronized methods) to protect against accidental or malicious interference by clients synchronizing on the object of the class: see [JCIP 4.2.1], [EJ Item 82], LCK00-J.
# Lk.4. Locking (`lock()`, `lockInterruptibly()`, `tryLock()`) and `unlock()` methods are used strictly with the recommended try-finally idiom, without deviations?

- The `lock()` (or `lockInterruptibly()`) call goes before the `try {}` block rather than within it?
- There are no statements between the `lock()` (or `lockInterruptibly()`) call and the beginning of the `try {}` block?
- The `unlock()` call is the first statement within the `finally {}` block?
This advice doesn't apply when locking methods and `unlock()` should occur in different scopes, i. e. not within the recommended try-finally idiom altogether. The containing methods could be annotated with Error Prone's `@LockMethod` and `@UnlockMethod` annotations.
There is a "Lock acquired but not safely unlocked" inspection in IntelliJ IDEA which corresponds to this item.

See also LCK08-J.
# Dl.1. If a thread-safe class is implemented so that there are nested critical sections protected by different locks, is it possible to redesign the code to get rid of nested critical sections? Sometimes a class could be split into several distinct classes, or some work that is done within a single thread could be split between several threads or tasks which communicate via concurrent queues. See [JCIP 5.3] for more information about the producer-consumer pattern.

There is an inspection "Nested 'synchronized' statement" in IntelliJ IDEA corresponding to this item.
# Dl.2. If restructuring a thread-safe class to avoid nested critical sections is not reasonable, was it deliberately checked that the locks are acquired in the same order throughout the code of the class? Is the locking order documented in the Javadoc comments for the fields where the lock objects are stored?

See LCK07-J for examples.
# Dl.3. If there are nested critical sections protected by several (potentially different) dynamically determined locks (for example, associated with some business logic entities), are the locks ordered before the acquisition? See [JCIP 10.1.2] for more information.
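A simplified sketch of the induced lock ordering technique from [JCIP 10.1.2] (the `Account` class is hypothetical): locks are always acquired in a global order derived from `System.identityHashCode`, so two transfers running in opposite directions cannot deadlock.

```java
public class OrderedTransfer {
    // Global tie-breaking lock for the rare identityHashCode collision.
    static final Object tieLock = new Object();

    public static class Account {
        long balance;
        Account(long balance) { this.balance = balance; }
    }

    private static void doTransfer(Account from, Account to, long amount) {
        from.balance -= amount;
        to.balance += amount;
    }

    public static void transfer(Account from, Account to, long amount) {
        int fromHash = System.identityHashCode(from);
        int toHash = System.identityHashCode(to);
        if (fromHash < toHash) {
            synchronized (from) { synchronized (to) { doTransfer(from, to, amount); } }
        } else if (fromHash > toHash) {
            synchronized (to) { synchronized (from) { doTransfer(from, to, amount); } }
        } else {
            // Hash collision: acquire the tie-breaking lock first, so at most
            // one thread at a time takes the two colliding locks in any order.
            synchronized (tieLock) {
                synchronized (from) { synchronized (to) { doTransfer(from, to, amount); } }
            }
        }
    }
}
```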
# Dl.4. Aren't there calls to some callbacks (listeners, etc.) that can be configured through public API, or extension interface calls, within critical sections? With such calls, the system might be inherently prone to deadlocks, because the external logic executed within a critical section may be unaware of the locking considerations and call back into the logic of the system, where some more locks may be acquired, potentially forming a locking cycle that might lead to a deadlock. Also, the external logic could just perform some time-consuming operation and thereby harm the efficiency of the system (see Sc.1). See [JCIP 10.1.3] and [EJ Item 79] for more information.

When public API or extension interface calls happen within lambdas passed into `Map.compute()`, `computeIfAbsent()`, `computeIfPresent()`, and `merge()`, there is a risk of not only deadlocks (see the next item) but also race conditions which could result in a corrupted map (if it's not a `ConcurrentHashMap`, e. g. a simple `HashMap`) or runtime exceptions.
Beware that a `CompletableFuture` or a `ListenableFuture` returned from a public API opens a door for performing some user-defined callbacks from the place where the future is completed, deep inside the library (framework). If the future is completed within a critical section, or from an `ExecutorService` whose threads must not block, such as a `ForkJoinPool` (see TE.4) or the worker pool of an I/O library, consider either returning a simple `Future`, or documenting that stages shouldn't be attached to this `CompletableFuture` in the default execution mode. See TE.7 for more information.
# Dl.5. Aren't there calls to methods on a `ConcurrentHashMap` instance within lambdas passed into `compute()`-like methods called on the same map? For example, the following code is deadlock-prone:

```java
map.compute(key, (String k, Integer v) -> {
    if (v == null || v == 0) {
        return map.get(DEFAULT_KEY);
    }
    return v;
});
```

Note that nested calls to non-lambda-accepting methods, including read-only access methods like `get()`, create the possibility of deadlocks as well as nested calls to `compute()`-like methods, because the former are not always lock-free.
# Sc.1. Are critical sections as small as possible? For every critical section: can't some statements in the beginning and the end of the section be moved out of it? Minimizing critical sections not only improves scalability, but also makes it easier to review them and spot race conditions and deadlocks.

This advice equally applies to lambdas passed into `ConcurrentHashMap`'s `compute()`-like methods.

See also [JCIP 11.4.1] and [EJ Item 79].
# Sc.2. Is it possible to increase locking granularity? If a thread-safe class encapsulates accesses to a map, is it possible to turn critical sections into lambdas passed into `ConcurrentHashMap.compute()`, `computeIfAbsent()`, or `computeIfPresent()` methods to enjoy effective per-key locking granularity? Otherwise, is it possible to use Guava's `Striped` or an equivalent? See [JCIP 11.4.3] for more information about lock striping.
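A minimal sketch of the per-key granularity point (class and method names are illustrative): instead of one coarse lock around a get-check-put sequence, `computeIfAbsent()` serializes updates only per key, so updates to different keys don't contend.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.LongAdder;

public class PerKeyCounting {
    private final ConcurrentMap<String, LongAdder> counts = new ConcurrentHashMap<>();

    public void record(String key) {
        // The mapping function runs atomically for the given key only;
        // other keys remain fully concurrent. See Sc.2.
        counts.computeIfAbsent(key, k -> new LongAdder()).increment();
    }

    public long count(String key) {
        LongAdder adder = counts.get(key);
        return adder == null ? 0 : adder.sum();
    }
}
```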
# Sc.3. Is it possible to use non-blocking collections instead of blocking ones? Here are some possible replacements within the JDK:

- `Collections.synchronizedMap(HashMap)`, `Hashtable` → `ConcurrentHashMap`
- `Collections.synchronizedSet(HashSet)` → `ConcurrentHashMap.newKeySet()`
- `Collections.synchronizedMap(TreeMap)` → `ConcurrentSkipListMap`. By the way, `ConcurrentSkipListMap` is not the state of the art concurrent sorted dictionary implementation. SnapTree is more efficient than `ConcurrentSkipListMap`, and there have been some research papers presenting algorithms that are claimed to be more efficient than SnapTree.
- `Collections.synchronizedSet(TreeSet)` → `ConcurrentSkipListSet`
- `Collections.synchronizedList(ArrayList)`, `Vector` → `CopyOnWriteArrayList`
- `LinkedBlockingQueue` → `ConcurrentLinkedQueue`
- `LinkedBlockingDeque` → `ConcurrentLinkedDeque`

Consider also using queues from JCTools instead of concurrent queues from the JDK: see Sc.8.
See also the item about using `ForkJoinPool` instead of `newFixedThreadPool(N)` for high-traffic executor services, which internally amounts to replacing a single blocking queue of tasks inside `ThreadPoolExecutor` with multiple non-blocking queues inside `ForkJoinPool`.
# Sc.4. Is it possible to use `ClassValue` instead of `ConcurrentHashMap<Class, ...>`? Note, however, that unlike `ConcurrentHashMap` with its `computeIfAbsent()` method, `ClassValue` doesn't guarantee that the per-class value is computed only once, i. e. `ClassValue.computeValue()` might be executed by multiple concurrent threads. So if the computation inside `computeValue()` is not thread-safe, it should be synchronized separately. On the other hand, `ClassValue` does guarantee that the same value is always returned from `ClassValue.get()` (unless `remove()` is called).
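A minimal sketch of `ClassValue` usage (the cached value here is an arbitrary illustration):

```java
public class ClassNameCache extends ClassValue<String> {
    // computeValue() may run concurrently in several threads for the same
    // class (unlike ConcurrentHashMap.computeIfAbsent), but afterwards get()
    // always returns the single winning value for that class.
    @Override
    protected String computeValue(Class<?> type) {
        return type.getSimpleName().toUpperCase();
    }
}
```

Unlike a `ConcurrentHashMap<Class, ...>`, `ClassValue` doesn't prevent classes from being unloaded, which matters in environments with dynamic class loading.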
# Sc.5. Was it considered to replace a simple lock with a `ReadWriteLock`? Beware, however, that it's more expensive to acquire and release a `ReentrantReadWriteLock` than a simple intrinsic lock, so the increase in scalability comes at the cost of reduced throughput. If the operations to be performed under a lock are short, or if a lock is already striped (see Sc.2) and therefore very lightly contended, replacing a simple lock with a `ReadWriteLock` might have a net negative effect on the application performance. See this comment for more details.
# Sc.6. Is it possible to use a `StampedLock` instead of a `ReentrantReadWriteLock` when reentrancy is not needed?
# Sc.7. Is it possible to use `LongAdder` for "hot fields" (see [JCIP 11.4.4]) instead of `AtomicLong` or `AtomicInteger` on which only methods like `incrementAndGet()`, `decrementAndGet()`, `addAndGet()` and (rarely) `get()` are called, but not `set()` and `compareAndSet()`?

Note that a field should really be updated from several concurrent threads steadily to justify using `LongAdder`. If the field is usually updated only from one thread at a time (there may be several updating threads, but each of them accesses the field infrequently, so the updates from different threads rarely happen at the same time), it's still better to use `AtomicLong` or `AtomicInteger`, because they take less memory than `LongAdder` and their updates are cheaper.
# Sc.8. Was it considered to use one of the array-based queues from the JCTools library instead of `ArrayBlockingQueue`? Those queues from JCTools are classified as blocking, but they avoid lock acquisition in many cases and are generally much faster than `ArrayBlockingQueue`.

See also Sc.3 regarding replacing blocking queues (and other collections) with non-blocking equivalents within the JDK.
# Sc.9. Was it considered to use Caffeine cache instead of other Cache implementations (such as the one from Guava)? Caffeine's performance is very good compared to other caching libraries.

Another reason to use Caffeine instead of Guava Cache is that it avoids an invalidation race: see RC.8.
# Sc.10. When some state or a condition is checked, or a resource is allocated within a critical section, and the result is usually expected to be positive (access granted, resource allocated) because the entity is rarely in a protected, restricted, or otherwise special state, or because the underlying resource is rarely in shortage, is it possible to apply speculation (optimistic concurrency) to improve scalability? This means using lighter-weight synchronization (e. g. shared locking with a `ReadWriteLock` or a `StampedLock` instead of exclusive locking) sufficient just to detect a shortage of the resource or that the entity is in a special state, and falling back to heavier synchronization only when necessary.

This principle is used internally in many scalable concurrent data structures, including `ConcurrentHashMap` and JCTools's queues, but could be applied at a higher logical level as well.
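A sketch of the speculation principle using `StampedLock`'s optimistic read, closely following the pattern from the `StampedLock` Javadoc: the common (uncontended) path does no locking at all and falls back to a shared read lock only when a writer intervened.

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticPoint {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    public void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    public double distanceFromOrigin() {
        // Speculate: read without any lock, then validate the stamp.
        long stamp = lock.tryOptimisticRead();
        double cx = x, cy = y;
        if (!lock.validate(stamp)) {
            // A writer intervened: fall back to heavier synchronization.
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```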
See also the article about Optimistic concurrency control on Wikipedia.
# Sc.11. Was it considered to use a `ForkJoinPool` instead of a `ThreadPoolExecutor` with N threads (e. g. returned from one of the `Executors.newFixedThreadPool()` methods) for thread pools on which a lot of small tasks are executed? `ForkJoinPool` is more scalable because internally it maintains one queue per worker thread, whereas `ThreadPoolExecutor` has a single, blocking task queue shared among all threads.
`ForkJoinPool` implements `ExecutorService`, as does `ThreadPoolExecutor`, so it could often be a drop-in replacement. For caveats and details, see this and this messages by Doug Lea.

See also the items about using non-blocking collections (including queues) instead of blocking ones and about using JCTools queues.
Regarding all items in this section, see also [EJ Item 83] and "Safe Publication and Safe Initialization in Java".
# LI.1. For every lazily initialized field: is the initialization code thread-safe and might it be called from multiple threads concurrently? If the answers are "no" and "yes", either double-checked locking should be used or the initialization should be eager.

Be especially wary of using lazy initialization in mutable objects. They are prone to cache invalidation race conditions: see RC.9.
# LI.2. If a field is initialized lazily under a simple lock, is it possible to use double-checked locking instead to improve performance?
# LI.3. Does double-checked locking follow the SafeLocalDCL pattern, as noted in ETS.1?
If the initialized objects are immutable, a more efficient UnsafeLocalDCL pattern might also be used. However, if the lazily-initialized field is not `volatile` and there are accesses to the field that bypass the initialization path, the value of the field must be carefully cached in a local variable. For example, the following code is buggy:

```java
private MyImmutableClass lazilyInitializedField;

void doSomething() {
    ...
    if (lazilyInitializedField != null) {         // (1)
        lazilyInitializedField.doSomethingElse(); // (2) - Can throw NPE!
    }
}
```

This code might result in a `NullPointerException`, because although a non-null value is observed when the field is read the first time at line 1, the second read at line 2 could observe null.

The above code could be fixed as follows:

```java
void doSomething() {
    MyImmutableClass lazilyInitialized = this.lazilyInitializedField;
    if (lazilyInitialized != null) {
        // Calling doSomethingElse() on a local variable to avoid NPE:
        // see https://github.com/code-review-checklists/java-concurrency#safe-local-dcl
        lazilyInitialized.doSomethingElse();
    }
}
```
See "Wishful Thinking: Happens-Before Is The Actual Ordering" and "Date-Race-Ful Lazy Initialization for Performance" for more information.
# LI.4. In each particular case, doesn't the net impact of double-checked locking and lazy field initialization on performance and complexity outweigh the benefits of lazy initialization? Isn't it ultimately better to initialize the field eagerly?
# LI.5. If a field is initialized lazily under a simple lock or using double-checked locking, does it really need locking? If nothing bad may happen if two threads do the initialization at the same time and use different copies of the initialized state, then a benign race could be allowed. The initialized field should still be `volatile` (unless the initialized objects are immutable) to ensure there is a happens-before edge between threads doing the initialization and reading the field. This is called a single-check idiom (or a racy single-check idiom if the field doesn't have a `volatile` modifier) in [EJ Item 83].

Annotate such fields with `@LazyInit` from `error_prone_annotations`. The place in code with the race should also be identified with WARNING comments: see NB.3.
# LI.6. Is the lazy initialization holder class idiom used for static fields which must be lazy, rather than double-checked locking? There are no reasons to use double-checked locking for static fields, because the lazy initialization holder class idiom is simpler, harder to make a mistake in, and is at least as efficient as double-checked locking (see benchmark results in "Safe Publication and Safe Initialization in Java").
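A minimal sketch of the holder class idiom (the field and method names are illustrative): the nested class is not loaded until the first call, and the JVM's class-initialization locking makes this thread-safe with zero synchronization on the fast path.

```java
public class Config {
    private static class Holder {
        // Computed once, when Holder is first loaded (i.e. on first getValue()).
        static final String EXPENSIVE_VALUE = computeExpensively();
    }

    private static String computeExpensively() {
        // Stand-in for an expensive computation.
        return "computed";
    }

    public static String getValue() {
        return Holder.EXPENSIVE_VALUE;
    }
}
```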
# NB.1. If there is some non-blocking or semi-symmetrically blocking code that mutates the state of a thread-safe class, was it deliberately checked that if a thread on a non-blocking mutation path is preempted after each statement, the object is still in a valid state? Are there enough comments, perhaps before almost every statement where the state is changed, to make it relatively easy for readers of the code to repeat and verify the check?
# NB.2. Is it possible to simplify some non-blocking code by confining all mutable state in an immutable POJO and updating it via compare-and-swap operations? This pattern is also mentioned in RC.5. Instead of a POJO, a single `long` value could be used if all parts of the state are integers that can together fit 64 bits. See also [JCIP 15.3.1].
# NB.3. Are there visible WARNING comments identifying the boundaries of non-blocking code? The comments should mark the start and the end of non-blocking code, partially blocking code, and benignly racy code (see Dc.8 and LI.5). The opening comments should:

- Justify the need for such error-prone code (which is a special case of Dc.1).
- Warn developers that changes in the following code should be made (and reviewed) extremely carefully.
# NB.4. If some condition is awaited in a (busy) loop, like in the following example:

```java
volatile boolean condition;

// in some method:
while (!condition) {
    // Or Thread.sleep/yield/onSpinWait, or no statement, i. e. a pure spin wait
    TimeUnit.SECONDS.sleep(1L);
}
// ... do something when condition is true
```

Is it explained in a comment why busy waiting is needed in the specific case, and why the costs and potential problems associated with busy waiting (see [JCIP 12.4.2] and [JCIP 14.1.1]) either don't apply in the specific case or are outweighed by the benefits?
If there is no good reason for spin waiting, it's preferable to synchronize explicitly using a tool such as `Semaphore`, `CountDownLatch`, or `Exchanger`, or, if the logic for which the spin loop awaits is executed in some `ExecutorService`, it's better to add a callback to the corresponding `CompletableFuture` or `ListenableFuture` (check TE.7 about doing this properly).

In test code waiting for some condition, a library such as Awaitility could be used instead of explicit looping with `Thread.sleep` calls.
Since `Thread.yield()` and `Thread.onSpinWait()` are rarely, if ever, useful outside of spin loops, this item could also be interpreted as that there should be a comment for every call to either of these methods, explaining either why they are called outside of a spin loop, or justifying the spin loop itself.

In any case, the field checked in the busy loop must be `volatile`: see IS.2 for details.
The busy wait pattern is covered by IntelliJ IDEA's inspections "Busy wait" and "while loop spins on field".
# TE.1. Are Threads given names when created? Are ExecutorServices created with thread factories that name threads?

It appears that different projects have different policies regarding other aspects of `Thread` creation: whether to make them daemon with `setDaemon()`, whether to set thread priorities, and whether a `ThreadGroup` should be specified. Many such rules can be effectively enforced with forbidden-apis.
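A minimal naming `ThreadFactory` sketch (the pool name is illustrative; libraries such as Guava's `ThreadFactoryBuilder` provide ready-made equivalents):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreads {
    /** Returns a factory producing threads named {@code poolName-1}, {@code poolName-2}, ... */
    static ThreadFactory named(String poolName) {
        AtomicInteger counter = new AtomicInteger();
        return runnable ->
                new Thread(runnable, poolName + "-" + counter.incrementAndGet());
    }

    static ExecutorService newNamedPool(int nThreads, String poolName) {
        return Executors.newFixedThreadPool(nThreads, named(poolName));
    }
}
```

Named threads make thread dumps and profiler output immediately attributable to the subsystem that created the pool.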
# TE.2. Aren't there threads created and started, but not stored in fields, a-la `new Thread(...).start()`, in some methods that may be called repeatedly? Is it possible to delegate the work to a cached or a shared `ExecutorService` instead?

Another form of this problem is when a `Thread` (or an `ExecutorService`) is created and managed within objects (in other words, active objects) that are relatively short-lived. Is it possible to reuse executors by creating them one level up the stack and passing shared executors to constructors of the short-lived objects, or a shared `ExecutorService` stored in a static field?
# TE.3. Aren't some network I/O operations performed in an `Executors.newCachedThreadPool()`-created `ExecutorService`? If a machine that runs the application has network problems or the network bandwidth is exhausted due to increased load, CachedThreadPools that perform network I/O might begin to create new threads uncontrollably.

Note that completing some `CompletableFuture` or `SettableFuture` from inside a cached thread pool and then returning this future to a user might expose the thread pool to executing unwanted actions if the future is used improperly: see TE.7 for details.
# TE.4. Aren't there blocking or I/O operations performed in tasks scheduled to a `ForkJoinPool` (except those performed via a `managedBlock()` call)? Parallel `Stream` operations are executed in the common `ForkJoinPool` implicitly, as well as the lambdas passed into `CompletableFuture`'s methods whose names end with "Async" but don't accept a custom executor.
Note that attaching blocking or I/O operations to a `CompletableFuture` as a stage in the default execution mode (via methods like `thenAccept()`, `thenApply()`, `handle()`, etc.) might also inadvertently lead to performing them in a `ForkJoinPool` from where the future may be completed: see TE.7.

`Thread.sleep()` is a blocking operation.
This advice should not be taken too far: occasional transient I/O (such as may happen during logging) and operations that may rarely block (such as `ConcurrentHashMap.put()` calls) usually shouldn't disqualify all their callers from execution in a `ForkJoinPool` or in a parallel `Stream`. See Parallel Stream Guidance for a more detailed discussion of these tradeoffs.

See also the section about parallel Streams.
# TE.5. An opposite of the previous item: can non-blocking computations be parallelized or executed asynchronously by submitting tasks to `ForkJoinPool.commonPool()` or via parallel Streams, instead of using a custom thread pool (e. g. created by one of the static factory methods from `Executors`)? Unless the custom thread pool is configured with a `ThreadFactory` that specifies a non-default priority for threads or a custom exception handler (see TE.1), there is little reason to create more threads in the system instead of reusing threads of the common `ForkJoinPool`.
# TE.6. Is every `ExecutorService` treated as a resource and shut down explicitly in the `close()` method of the containing object, or in a try-with-resources or a try-finally statement? Failure to shut down an `ExecutorService` might lead to a thread leak even if an `ExecutorService` object is no longer accessible, because some implementations (such as `ThreadPoolExecutor`) shut themselves down in a finalizer, while `finalize()` is not guaranteed to ever be called by the JVM.

To make explicit shutdown possible, first, `ExecutorService` objects must not be assigned into variables and fields of `Executor` type.
# TE.7. Are non-async stages attached to a `CompletableFuture` simple and non-blocking, unless the future is completed from a thread in the same thread pool as the thread from where a `CompletionStage` is attached? This also applies when an asynchronous callback is attached using Guava's `ListenableFuture.addListener()` or `Futures.addCallback()` methods and `directExecutor()` (or an equivalent) is provided as the executor for the callback.
Non-async execution is called default execution (or default mode) in the documentation for `CompletionStage`: these are the methods `thenApply()`, `thenAccept()`, `thenRun()`, `handle()`, etc. Such stages may be executed either from the thread adding a stage, or from the thread calling `future.complete()`.
If the `CompletableFuture` originates from a library, it's usually unknown in which thread (executor) it is completed; therefore, chaining a heavyweight, blocking, or I/O operation to such a future might lead to the problems described in TE.3, TE.4 (if the future is completed from a `ForkJoinPool` or an event loop executor in an asynchronous library), and Dl.4.

Even if the `CompletableFuture` is created in the same codebase as the stages attached to it, but is completed in a different thread pool, the default stage execution mode leads to non-deterministic scheduling of operations. If these operations are heavyweight and/or blocking, this reduces the system's operational predictability and robustness, and also creates a subtle dependency between the components: e. g. if the component which completes the future decides to migrate this action to a `ForkJoinPool`, it could suddenly lead to the problems described in the previous paragraph.
A lightweight, non-blocking operation which is OK to attach to a `CompletableFuture` as a non-async stage may be something like incrementing an `AtomicInteger` counter, adding an element to a non-blocking queue, putting a value into a `ConcurrentHashMap`, or a logging statement.

It's also fine to attach a stage in the default mode if it is preceded with an `if (future.isDone())` check, which guarantees that the completion stage will be executed immediately in the current thread (assuming the current thread belongs to the proper thread pool to perform the stage action).

The specific reason(s) making non-async stage or callback attachment permissible (future completion and stage attachment happening in the same thread pool; simple/non-blocking callback; or a `future.isDone()` check) should be identified in a comment.

See also the Javadoc for `ListenableFuture.addListener()` describing this problem.
# TE.8. Is it possible to execute a task or an action with a delay via a `ScheduledExecutorService`, rather than by calling `Thread.sleep()` before performing the work or submitting the task to an executor? `ScheduledExecutorService` allows executing many such tasks on a small number of threads, while the approach with `Thread.sleep` requires a dedicated thread for every delayed action. Sleeping in the context of an unknown executor (if there is insufficient concurrent access documentation for the method, as per Dc.1, or if this is a concurrency-agnostic library method) before submitting the task to an executor is also bad: the context executor may not be well-suited for blocking calls such as `Thread.sleep`, see TE.4 for details.

This item equally applies to scheduling one-shot and recurrent delayed actions; there are methods for both scenarios in `ScheduledExecutorService`.
Be cautious, however, about scheduling tasks with affinity to system time or UTC time (e. g. the beginning of each hour) using `ScheduledThreadPoolExecutor`: it can experience unbounded clock drift.
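A minimal sketch of replacing `Thread.sleep()` with scheduling (the scheduler field and method names are illustrative): a single scheduler thread serves any number of delayed actions, while the `sleep` approach would hold a thread hostage per action.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DelayedTasks {
    // One shared scheduler thread handles all delayed tasks.
    static final ScheduledExecutorService SCHEDULER =
            Executors.newSingleThreadScheduledExecutor();

    /** One-shot delayed action; no thread blocks while the delay elapses. */
    static ScheduledFuture<String> sayLater(String message, long delayMillis) {
        return SCHEDULER.schedule(() -> message, delayMillis, TimeUnit.MILLISECONDS);
    }
}
```

Recurrent actions use `scheduleAtFixedRate()` or `scheduleWithFixedDelay()` on the same scheduler.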
# TE.9. Is the result of `ExecutorService.awaitTermination()` calls checked?

Calling `awaitTermination()` (or `ForkJoinPool.awaitQuiescence()`) and not checking the result makes little sense. If it's actually important to await termination (e.g. to ensure a happens-before relation between the completion of the actions scheduled to the `ExecutorService` and some actions following the `awaitTermination()` call, or because termination means the release of some heavy resource whose leak would be noticeable), then it is reasonable to at least check the result of `awaitTermination()` and log a warning if the result is negative, making potential problems easier to debug later. Otherwise, if awaiting termination really makes no difference, it's better not to call `awaitTermination()` at all.
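A shutdown helper following this advice might look like the sketch below (the helper itself is hypothetical; `awaitTermination()` is the standard `ExecutorService` method):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

// A sketch of a shutdown helper that actually checks awaitTermination()'s
// result instead of discarding it.
public class ShutdownUtil {
    /** Returns true if the executor terminated within the timeout; logs a warning otherwise. */
    public static boolean shutdownAndAwait(ExecutorService executor, long timeoutSeconds) {
        executor.shutdown();
        boolean terminated;
        try {
            terminated = executor.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interruption status (see IF.1)
            return false;
        }
        if (!terminated) {
            // A false result means tasks may still be running: any happens-before
            // reasoning based on termination is invalid, and heavy resources may still be held.
            System.err.println("WARN: executor did not terminate within " + timeoutSeconds + "s");
        }
        return terminated;
    }
}
```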
Apart from `ExecutorService`, this item also applies to the `awaitTermination()` methods on `AsynchronousChannelGroup` and `io.grpc.ManagedChannel` from gRPC-Java.
Omitted checks of `awaitTermination()` results can be found using the Structural search inspection in IntelliJ IDEA with the following pattern:

```
$x$.awaitTermination($y$, $z$);
```
See also a similar item about not checking the result of `CountDownLatch.await()`.
# TE.10. Isn't an `ExecutorService` assigned into a variable or a field of `Executor` type?

This makes it impossible to follow the practice of explicit shutdown of `ExecutorService` objects.
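Sketching the point (class and field names are hypothetical): declaring the field with the `ExecutorService` type keeps `shutdown()` reachable for explicit cleanup.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// A sketch: the field is typed as ExecutorService, not Executor.
public class ReportService implements AutoCloseable {
    // DON'T: private final Executor executor = Executors.newFixedThreadPool(4);
    // Executor has no shutdown(), so the pool could never be shut down through the field.
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    public void submitReport(Runnable report) {
        executor.execute(report);
    }

    @Override
    public void close() {
        executor.shutdown();
        try {
            if (!executor.awaitTermination(10, TimeUnit.SECONDS)) { // check the result (see TE.9)
                System.err.println("WARN: report executor did not terminate");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the status (see IF.1)
        }
    }
}
```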
In IntelliJ IDEA, it's possible to find violations of this practice automatically using two patterns added to the Structural Search inspection:

- "Java" pattern: `$x$ = $y$`, where the "Type" of `$x$` is `Executor` (the "within type hierarchy" flag is off) and the "Type" of `$y$` is `ExecutorService` (the "within type hierarchy" flag is on).
- "Java - Class Member" pattern: `$Type$ $x$ = $y$;`, where the "Text" of `$Type$` is `Executor` and the "Type" of `$y$` is `ExecutorService` (within type hierarchy).
# TE.11. Isn't a `ScheduledExecutorService` assigned into a variable or a field of `ExecutorService` type?

This is wasteful because the primary Java implementation of `ScheduledExecutorService`, `ScheduledThreadPoolExecutor` (returned from the `Executors.newScheduledThreadPool()` and `newSingleThreadScheduledExecutor()` static factory methods), uses a `PriorityQueue` internally to manage tasks, which incurs a higher memory footprint and CPU overhead compared to non-scheduled `ExecutorService` implementations such as a vanilla `ThreadPoolExecutor` or `ForkJoinPool`.
This problem could be caught statically in a way similar to what is described in the previous item.
# PS.1. For every use of parallel Streams via `Collection.parallelStream()` or `Stream.parallel()`: is it explained in a comment preceding the stream operation why a parallel `Stream` is used? Are there back-of-the-envelope calculations or references to benchmarks showing that the total CPU time cost of the parallelized computation exceeds 100 microseconds?

Is there a note in the comment that the parallelized operations are I/O-free and non-blocking, as per TE.4? The latter might be obvious at the moment of writing, but as the codebase evolves, the logic called from the parallel stream operation might accidentally become blocking. Without the comment, it's harder to notice the discrepancy and the fact that the computation is no longer a good fit for parallel Streams. It can be fixed by restoring the non-blocking logic or by using a simple sequential `Stream` instead of a parallel one.
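A use of a parallel `Stream` accompanied by the kind of justifying comment this item asks for might look like the sketch below (the cost figures in the comment are illustrative, not measured):

```java
import java.util.stream.LongStream;

// A sketch: the justification for .parallel() is written down next to the pipeline.
public class SumOfSquares {
    public static long sumOfSquares(long n) {
        // Parallel is justified here: the pipeline is CPU-bound, I/O-free and
        // non-blocking (per TE.4), and for the intended inputs (n in the hundreds
        // of millions) the sequential version costs far more than 100 us of CPU time.
        return LongStream.rangeClosed(1, n)
                .parallel()
                .map(i -> i * i)
                .sum();
    }
}
```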
# Ft.1. Does a method returning a `Future` actually perform some blocking operation asynchronously? If it doesn't, was it considered to perform the non-blocking computation and return the result directly from the method, rather than wrapped in a `Future`?

There are situations when someone might still want to return a `Future` wrapping some non-blocking computation, essentially relieving the users from writing boilerplate code like `CompletableFuture.supplyAsync(obj::expensiveComputation)` if all of them want to run the method asynchronously. But if at least some of the clients don't need the indirection, it's better not to wrap the logic into a `Future` prematurely and to give the users of the API a choice to do this themselves.
See also ETS.3 about unneeded thread safety of a method.
# Ft.2. Aren't there blocking operations in a method returning a `Future` before the asynchronous execution is started, and is it started at all? Here is the antipattern:
```java
// DON'T DO THIS
Future<Salary> getSalary(Employee employee) throws ConnectionException {
    Branch branch = retrieveBranch(employee); // A database or an RPC call
    return CompletableFuture.supplyAsync(() -> {
        return retrieveSalary(branch, employee); // Another database or an RPC call
    }, someBlockingIoExecutor());
}
```
Blocking the caller thread is unexpected for a user seeing a method returning a `Future`.
An example completely without asynchrony:
```java
// DON'T DO THIS
Future<Salary> getSalary(Employee employee) throws ConnectionException {
    SalaryDTO salaryDto = retrieveSalary(employee); // A database or an RPC call
    Salary salary = toSalary(salaryDto);
    return completedFuture(salary); // Or Guava's immediateFuture(), Scala's successful()
}
```
If the `retrieveSalary()` method is not blocking itself, `getSalary()` may not need to return a `Future` at all.
Another problem with making blocking calls before scheduling the asynchronous part is that the resulting code has multiple failure paths: either the future may complete exceptionally, or the method itself may throw an exception (typically from the blocking operation), which is illustrated by `getSalary() throws ConnectionException` in the above examples.
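A fixed version of the first antipattern above could look like the following sketch (the types and calls are placeholders standing in for real database/RPC code; the return type is declared as `CompletableFuture` here so the demo can use `join()`): both blocking calls move inside the asynchronous part, so the caller thread never blocks and there is a single failure path through the future.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A sketch of the fixed version: no blocking work before supplyAsync().
public class SalaryService {
    private final ExecutorService blockingIoExecutor = Executors.newCachedThreadPool();

    CompletableFuture<String> getSalary(String employee) {
        return CompletableFuture.supplyAsync(() -> {
            String branch = retrieveBranch(employee); // blocking call, now off the caller thread
            return retrieveSalary(branch, employee);  // blocking call
        }, blockingIoExecutor);
    }

    // Placeholder implementations standing in for database/RPC calls.
    private String retrieveBranch(String employee) { return "branch-of-" + employee; }
    private String retrieveSalary(String branch, String employee) { return "salary@" + branch; }
}
```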
This advice also applies when a method returns any object representing an asynchronous execution other than `Future`, such as `Deferred`, `Flow.Publisher`, `org.reactivestreams.Publisher`, or RxJava's `Observable`.
# Ft.3. If a method returns a `Future` and some logic in the beginning of it may lead to an expected failure (i.e. not a result of a programming bug), was it considered to propagate the expected failure via a `Future` completed exceptionally, rather than by throwing from the method? For example, the following method:
```java
Future<Response> makeQuery(String query) throws InvalidQueryException {
    Request req = compile(query); // Can throw an InvalidQueryException
    return CompletableFuture.supplyAsync(() -> service.remoteCall(req), someBlockingIoExecutor());
}
```
May be converted into:
```java
Future<Response> makeQuery(String query) {
    Request req;
    try {
        req = compile(query);
    } catch (InvalidQueryException e) {
        // Explicit catch preserves the semantics of the original version of makeQuery() most
        // closely. If compile(query) is an expensive computation, it may be undesirable to
        // schedule it to someBlockingIoExecutor() by simply moving compile(query) into the
        // lambda below, because if this pool has more threads than CPUs then too many
        // compilations might be started in parallel, leading to excessive switches between
        // threads running CPU-bound tasks.
        // Another alternative is scheduling compile(query) to the common FJP:
        // CompletableFuture.supplyAsync(() -> compile(query))
        //     .thenApplyAsync(service::remoteCall, someBlockingIoExecutor());
        CompletableFuture<Response> f = new CompletableFuture<>();
        f.completeExceptionally(e);
        return f; // Or use Guava's immediateFailedFuture()
    }
    return CompletableFuture.supplyAsync(() -> service.remoteCall(req), someBlockingIoExecutor());
}
```
The point of this refactoring is the unification of failure paths, so that the users of the API don't have to deal with multiple different ways of handling errors from the method.
Similarly to the previous item, this consideration also applies when a method returns any object representing an asynchronous execution other than `Future`, such as `Deferred`, `Publisher`, or `Observable`.
# IF.1. If some code propagates an `InterruptedException` wrapped into another exception (e.g. `RuntimeException`), is the interruption status of the current thread restored before the wrapping exception is thrown?
Propagating an `InterruptedException` wrapped into another exception is a controversial practice (especially in libraries), and it may be prohibited in some projects completely, or in specific subsystems.
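The pattern this item asks for can be sketched as follows (the method is hypothetical):

```java
// A sketch of restoring the interruption status before throwing the wrapping exception.
public class InterruptRestore {
    public static void doWork() {
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            // Restore the flag first, so code up the stack (and the thread's owner)
            // can still observe that an interruption happened.
            Thread.currentThread().interrupt();
            throw new RuntimeException("Interrupted while working", e);
        }
    }
}
```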
# IF.2. If some method returns normally after catching an `InterruptedException`, is this coherent with the (documented) semantics of the method?

Returning normally after catching an `InterruptedException` usually makes sense only in two types of methods:
- `Runnable.run()` or `Callable.call()` themselves, or methods that are intended to be submitted as tasks to some executors as method references. `Thread.currentThread().interrupt()` should still be called before returning from the method, assuming that the interruption policy of the threads in the `Executor` is unknown.
- Methods with "try" or "best effort" semantics. Documentation for such methods should make it clear that they stop attempting to do something when the thread is interrupted, restore the interruption status of the thread, and return. For example, `log()` or `sendMetric()` could probably be such methods, as well as `boolean trySendMoney()`, but not `void sendMoney()`.
If a method doesn't fall into either of these categories, it should propagate the `InterruptedException` directly or wrapped into another exception (see the previous item), or it should not return normally after catching an `InterruptedException`, but rather continue execution in some sort of retry loop, saving the interruption status and restoring it before returning (see an example from JCIP). Fortunately, in most situations it's not needed to write such boilerplate code: one of the methods from the `Uninterruptibles` utility class from Guava can be used.
# IF.3. If an `InterruptedException` or a `TimeoutException` is caught on a `Future.get()` call and the task behind the future doesn't have side effects, i.e. `get()` is called only to obtain and use the result in the context of the current thread rather than to achieve some side effect, is the future canceled?
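The pattern can be sketched like this (the helper method is hypothetical; it also restores the interruption status, per IF.1):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// A sketch: when get() times out or is interrupted and the task has no side
// effects, nobody will ever consume the result, so the future is canceled.
public class FutureTimeout {
    /** Returns the result, or null on timeout/interruption (canceling the now-useless task). */
    public static <T> T getOrCancel(Future<T> future, long timeoutMillis) {
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // mayInterruptIfRunning: don't let the task waste a thread
            return null;
        } catch (InterruptedException e) {
            future.cancel(true);
            Thread.currentThread().interrupt(); // restore the status (see IF.1)
            return null;
        } catch (ExecutionException e) {
            throw new RuntimeException(e.getCause());
        }
    }
}
```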
See [JCIP 7.1] for more information about thread interruption and task cancellation.
# Tm.1. Are values returned from `System.nanoTime()` compared in an overflow-aware manner, as described in the documentation for this method?
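What the documentation prescribes can be sketched as follows: compare the *difference* of two readings against zero, never the absolute values.

```java
// A sketch of overflow-aware comparison of System.nanoTime() readings.
public class Deadlines {
    /** deadlineNanos is a value derived from System.nanoTime(); returns true if it has passed. */
    public static boolean isExpired(long nowNanos, long deadlineNanos) {
        // CORRECT: the subtraction stays meaningful even when the values wrap around.
        // WRONG: nowNanos >= deadlineNanos breaks near numerical overflow.
        return nowNanos - deadlineNanos >= 0;
    }
}
```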
# Tm.2. Isn't `System.currentTimeMillis()` used for time comparisons, timed blocking, measuring intervals, timeouts, etc.?

`System.currentTimeMillis()` is subject to the "time going backward" phenomenon. This might happen due to a time correction on a server, for example.
`System.nanoTime()` should be used instead of `currentTimeMillis()` for the purposes of time comparison, interval measurement, etc. Values returned from `nanoTime()` never decrease (but may overflow; see the previous item). Warning: `nanoTime()` didn't always uphold this guarantee in OpenJDK until 8u192 (see JDK-8184271). Make sure to use a sufficiently recent distribution.
In distributed systems, the leap second adjustment causes similar issues.
# Tm.3. Do variables that store time limits and periods have suffixes identifying their units, for example, "timeoutMillis" (also -Seconds, -Micros, -Nanos) rather than just "timeout"? In method and constructor parameters, an alternative is providing a `TimeUnit` parameter next to a "timeout" parameter. This is the preferred option for public APIs.
# Tm.4. Do methods that have "timeout" and "delay" parameters treat negative arguments as zeros? This is to obey the principle of least astonishment, because all timed blocking methods in classes from `java.util.concurrent.*` follow this convention.
# Tm.5. Are tasks that should happen at a certain system time, UTC time, or wall-clock time far in the future, or that run periodically with a cadence expressed in terms of system/UTC/wall-clock time (rather than the machine's internal CPU time), not scheduled with `ScheduledThreadPoolExecutor`?

`ScheduledThreadPoolExecutor` (this class is also behind all factory methods in `Executors` which return a `ScheduledExecutorService`) uses `System.nanoTime()` for timing intervals. `nanoTime()` can drift against the system time and the UTC time.
`CronScheduler` is a scheduling class designed to be proof against unbounded clock drift relative to UTC or system time, for both one-shot and periodic tasks. See more detailed recommendations on choosing between `ScheduledThreadPoolExecutor` and `CronScheduler`. On Android, use Android-specific APIs.
# Tm.6. On consumer devices (PCs, laptops, tablets, phones), `ScheduledThreadPoolExecutor` (or `Timer`) is not used for human interaction tasks or interactions between the device and a remote service?

Examples of human interaction tasks are alarms, notifications, timers, or task management. Examples of interactions between a user's device and remote services are checking for new e-mails or messages, widget updates, or software updates. The reason for this is that neither `ScheduledThreadPoolExecutor` nor `Timer` accounts for machine suspension (such as sleep or hibernation mode). On Android, use Android-specific APIs instead. Consider `CronScheduler` as a replacement for `ScheduledThreadPoolExecutor` in these cases for end-user JVM apps.
# TL.1. Can a `ThreadLocal` field be `static final`? There are three cases when a `ThreadLocal` cannot be static:
- It holds some state specific to the containing instance object, rather than, for example, reusable objects to avoid allocations (which would be the same for all `ThreadLocal`-containing instances).
- A method using the `ThreadLocal` may call another method (or the same method, recursively) that also uses this `ThreadLocal`, but on a different containing object.
- There is a class (or `enum`) modelling a specific type of `ThreadLocal` usage, and there is only a limited number of instances of this class in the JVM: i.e. all are constants stored in `static final` fields, or `enum` constants.
If a usage of `ThreadLocal` doesn't fall into any of these categories, it can be `static final`.
There is an inspection "ThreadLocal field not declared static final" in IntelliJ IDEA whichcorresponds to this item.
Static `ThreadLocal` fields could also be enforced using Checkstyle, with the following combination of checks:
```xml
<!-- Enforce 'private static final' order of modifiers -->
<module name="ModifierOrder"/>

<!-- Ensure all ThreadLocal fields are private -->
<!-- Requires https://github.com/sevntu-checkstyle/sevntu.checkstyle -->
<module name="AvoidModifiersForTypesCheck">
  <property name="forbiddenClassesRegexpProtected" value="ThreadLocal"/>
  <property name="forbiddenClassesRegexpPublic" value="ThreadLocal"/>
  <property name="forbiddenClassesRegexpPackagePrivate" value="ThreadLocal"/>
</module>

<!-- Prohibit any ThreadLocal field which is not private static final -->
<module name="Regexp">
  <property name="id" value="nonStaticThreadLocal"/>
  <property name="format" value="^\s*private\s+(ThreadLocal|static\s+ThreadLocal|final\s+ThreadLocal)"/>
  <property name="illegalPattern" value="true"/>
  <property name="message" value="Non-static final ThreadLocal"/>
</module>
```
# TL.2. Doesn't a `ThreadLocal` mask issues with the code, such as poor control flow or data flow design? Is it possible to redesign the system without using a `ThreadLocal`, and would that be simpler? This is especially true for instance-level (non-static) `ThreadLocal` fields; see also TL.1 and TL.4 about them.
See Dc.2 about the importance of articulating the control flow and the data flow of a subsystem, which may help to uncover other issues with the design.
# TL.3. Isn't a `ThreadLocal` used only to reuse some small heap objects that are cheap to allocate and initialize and would otherwise be allocated relatively infrequently? In this case, the cost of accessing the `ThreadLocal` would likely outweigh the benefit from reducing allocations. Evidence should be supplied that introducing the `ThreadLocal` shortens GC pauses and/or increases the overall throughput of the system.
# TL.4. If the threads which execute code using a non-static `ThreadLocal` are long-living and there is a fixed number of them (e.g. workers of a fixed-size `ThreadPoolExecutor`), and there is a greater number of shorter-living `ThreadLocal`-containing objects, was it considered to replace the instance-level `ThreadLocal<Val>` with a `ConcurrentHashMap<Thread, Val> threadLocalValues` confined to the objects, accessed like `threadLocalValues.get(Thread.currentThread())`? This approach requires some confidence and knowledge about the threading model of the subsystem (see Dc.2), though it may also be trivial if the Active Object pattern is used (see Dn.2). It is much friendlier to GC because no short-living weak references are produced.
# CN.1. If a class manages native resources and employs `java.lang.ref.Cleaner` (or `sun.misc.Cleaner`; or overrides `Object.finalize()`) to ensure that resources are freed when objects of the class are garbage collected, and the class implements `Closeable` with the same cleanup logic executed from `close()` directly rather than through `Cleanable.clean()` (or `sun.misc.Cleaner.clean()`) in order to distinguish between explicit `close()` and cleanup through a cleaner (for example, `clean()` can log a warning about the object not being closed explicitly before freeing the resources), is it ensured that even if the cleanup logic is called concurrently from multiple threads, the actual cleanup is performed only once?

The cleanup logic in such classes should obviously be idempotent, because it's usually expected to be called twice: the first time from the `close()` method and the second time from the cleaner or `finalize()`. The catch is that the cleanup must be *concurrently* idempotent, even if `close()` is never called concurrently on objects of the class. That's because the garbage collector may consider the object to have become unreachable before the end of a `close()` call and initiate cleanup through the cleaner or `finalize()` while `close()` is still being executed.
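Concurrently idempotent cleanup can be sketched as follows (the class is hypothetical, and cleaner registration is omitted): `compareAndSet()` guarantees the resource is freed exactly once even if `close()` and the cleaner race.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A sketch: only the one caller that wins the CAS performs the actual cleanup.
public class NativeResource implements AutoCloseable {
    private final AtomicBoolean freed = new AtomicBoolean(false);
    private int freeCount; // for illustration only; written only by the single CAS winner

    @Override
    public void close() {
        cleanup(true);
    }

    // Also invoked from the cleaner/finalizer (registration omitted in this sketch).
    void cleanup(boolean explicit) {
        // Only one caller wins the CAS; every other call is a no-op.
        if (freed.compareAndSet(false, true)) {
            if (!explicit) {
                System.err.println("WARN: resource was not closed explicitly");
            }
            freeCount++; // placeholder for the actual native free
        }
    }

    int timesFreed() { return freeCount; }
}
```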
Alternatively, `close()` could simply delegate to `Cleanable.clean()` (`sun.misc.Cleaner.clean()`), which is concurrently idempotent itself. But then it's impossible to distinguish between explicit and automatic cleanup.
See alsoJLS 12.6.2.
# CN.2. In a class with some native state that has a cleaner or overrides `finalize()`, are the bodies of all methods that interact with the native state wrapped with `try { ... } finally { Reference.reachabilityFence(this); }`, including constructors and the `close()` method, but excluding `finalize()`?

This is needed because an object could become unreachable, and the native memory could be freed from the cleaner, while a method that interacts with the native state is still being executed, which might lead to use-after-free or JVM memory corruption.
`reachabilityFence()` in `close()` also eliminates the race between `close()` and the cleanup executed through the cleaner or `finalize()` (see the previous item), but it may be a good idea to retain the thread safety precautions in the cleanup procedure anyway, especially if the class in question belongs to the public API of the project. Otherwise, if `close()` is accidentally or maliciously called concurrently from multiple threads, the JVM might crash due to a double memory free or, worse, memory might be silently corrupted. The promise of the Java platform is that however buggy some code is, as long as it passes bytecode verification, thrown exceptions should be the worst possible outcome and the virtual machine shouldn't crash. CN.4 also stresses this principle.
`Reference.reachabilityFence()` has been added in JDK 9. If the project targets JDK 8 and the HotSpot JVM, any method with an empty body is an effective emulation of `reachabilityFence()`.
See the documentation for `Reference.reachabilityFence()` and this discussion in the concurrency-interest mailing list for more information.
# CN.3. Aren't there classes that have cleaners or override `finalize()` not to free native resources, but merely to return heap objects to some pools, or merely to report that some heap objects are not returned to some pools?

This is an antipattern because of the tremendous complexity of using cleaners and `finalize()` correctly (see the previous two items) and because of the negative impact on performance (especially of `finalize()`), which might be even larger than the impact of not returning objects back to a pool and thus slightly increasing the garbage allocation rate in the application. If the latter issue turns out to be important, it's better diagnosed with async-profiler in the allocation profiling mode (`-e alloc`) than by registering cleaners or overriding `finalize()`.
This advice also applies when the pooled objects are direct ByteBuffers or other Java wrappers of native memory chunks. `async-profiler -e malloc` could be used in such cases to detect direct memory leaks.
# CN.4. If some classes have some state in native memory and are used actively in concurrent code, or belong to the public API of the project, was it considered to make them thread-safe?

As described in CN.2, if objects of such classes are inadvertently accessed from multiple threads without proper synchronization, memory corruption and JVM crashes might result. This is why classes in the JDK such as `java.util.zip.Deflater` use synchronization internally, despite `Deflater` objects not being intended for concurrent use from multiple threads.
Note that making classes with some state in native memory thread-safe also implies that the native state should be safely published in constructors. This means that either the native state should be stored exclusively in `final` fields, or `VarHandle.storeStoreFence()` should be called in constructors after full initialization of the native state. If the project targets a JDK older than 9, where `VarHandle` is not available, the same effect could be achieved by wrapping constructors' bodies in `synchronized (this) { ... }`.
# Bonus: is forbidden-apis configured for the project, and are `java.lang.StringBuffer`, `java.util.Random` and `Math.random()` prohibited?

`StringBuffer` and `Random` are thread-safe and all their methods are synchronized, which is never useful in practice and only inhibits performance. In OpenJDK, `Math.random()` delegates to a global static `Random` instance. `StringBuilder` should be used instead of `StringBuffer`; `ThreadLocalRandom` or `SplittableRandom` should be used instead of `Random` (see also T.2).
- [JLS] Java Language Specification, Memory Model and `final` field semantics.
- [EJ] "Effective Java" by Joshua Bloch, Chapter 11: Concurrency.
- [JCIP] "Java Concurrency in Practice" by Brian Goetz, Tim Peierls, Joshua Bloch, Joseph Bowbeer, David Holmes, and Doug Lea.
- Posts by Aleksey Shipilёv:
- When to use parallel streams, written by Doug Lea with the help of Brian Goetz, Paul Sandoz, Aleksey Shipilev, Heinz Kabutz, Joe Bowbeer, …
- SEI CERT Oracle Coding Standard for Java:
- C++: Concurrency and parallelism section in C++ Core Guidelines.
- Go: Concurrency section in Effective Go.
This checklist was originally published as a post on Medium.

The following people contributed ideas and comments about this checklist before it was imported to GitHub:
- Roman Leventov
- Marko Topolnik
- Matko Medenjak
- Chris Vest
- Simon Willnauer
- Ben Manes
- Gleb Smirnov
- Andrey Satarin
- Benedict Jin
- Petr Janeček
The ideas for some items are taken from the "Java Concurrency Gotchas" presentation by Alex Miller and the "What is the most frequent concurrency issue you've encountered in Java?" question on StackOverflow (thanks to Alex Miller, who created this question, and to the contributors).

At the moment when this checklist was imported to GitHub, all the text was written by Roman Leventov.

The checklist is not considered complete; comments and contributions are welcome!

This checklist is public domain. By submitting a PR to this repository, contributors agree to release their writing to the public domain.