Concurrency is possible irrespective of the number of cores, even on a single-core machine.
High-level concurrency APIs abstract thread management away from the application:
instead of working with Thread objects directly, the application passes its tasks to an executor.
Process -
A self-contained execution environment
Private set of resources
Own memory space
Communication happens through Inter-Process Communication (IPC)
A Java application can create additional processes using a ProcessBuilder object.
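A minimal sketch of creating a separate process with ProcessBuilder; the class name `ProcessBuilderDemo` and the choice of `java -version` as the command are illustrative assumptions, not part of the notes.

```java
import java.util.List;

public class ProcessBuilderDemo {
    // Build (but do not yet start) a process description; the command and
    // stream redirection are configured before ProcessBuilder.start().
    public static ProcessBuilder build() {
        ProcessBuilder pb = new ProcessBuilder("java", "-version");
        pb.redirectErrorStream(true);   // merge stderr into stdout
        return pb;
    }

    public static void main(String[] args) throws Exception {
        Process p = build().start();    // spawns the separate process
        p.waitFor();
        System.out.println("exit code: " + p.exitValue());
    }
}
```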
Lock objects -
simplify many concurrent applications
Lock objects work very much like the implicit locks used by synchronized code
Lock objects also support a wait/notify mechanism through their associated Condition objects
Advantage over implicit (intrinsic) locks is the ability to
back out of an attempt to acquire a lock.
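The ability to back out can be sketched with ReentrantLock.tryLock(); the class name `TryLockDemo` is mine:

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Attempt the lock; unlike an implicit (synchronized) lock, the thread
    // can back out immediately instead of blocking when the lock is held.
    public static boolean tryWork() {
        if (lock.tryLock()) {
            try {
                return true;            // got the lock, do guarded work here
            } finally {
                lock.unlock();
            }
        }
        return false;                   // backed out: lock was unavailable
    }

    public static void main(String[] args) {
        System.out.println("acquired: " + tryWork());
    }
}
```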
Executors - Executor interfaces (three executor object types),
thread pools, and Fork/Join (Java 7; takes advantage of multiple processors)
Executor interfaces (three executor object types) -
Executor: a simple interface that supports launching new tasks.
The Executor interface provides a single method, execute().
Given a Runnable object r and an Executor object e, replace
(new Thread(r)).start(); --> with e.execute(r);
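The replacement above can be sketched as follows; the latch and the class name `ExecuteDemo` are my additions so the run is observable:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecuteDemo {
    // Hand the Runnable to an Executor instead of starting a Thread directly.
    public static boolean runViaExecutor() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        Runnable r = done::countDown;
        ExecutorService e = Executors.newSingleThreadExecutor();
        e.execute(r);                   // replaces (new Thread(r)).start();
        boolean finished = done.await(5, TimeUnit.SECONDS);
        e.shutdown();
        return finished;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task ran: " + runViaExecutor());
    }
}
```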
ExecutorService: extends Executor; in addition to execute(), it provides a more
versatile submit() method that accepts Runnable and Callable objects, allowing
a task to return a value. submit() returns a Future object, which is used to
retrieve the Callable return value and to manage the status of
both Callable and Runnable tasks.
ExecutorService also provides methods for managing the shutdown of the executor.
(The fork/join framework is an implementation of ExecutorService.)
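A short sketch of submit() with a Callable and Future; the class name `SubmitDemo` and the trivial task are illustrative:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    // submit() accepts a Callable; the returned Future carries the result.
    public static int sumOnExecutor() throws Exception {
        ExecutorService es = Executors.newFixedThreadPool(2);
        Callable<Integer> task = () -> 21 + 21;
        Future<Integer> f = es.submit(task);
        int result = f.get();           // blocks until the Callable returns
        es.shutdown();                  // orderly shutdown of the executor
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOnExecutor());   // prints 42
    }
}
```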
ScheduledExecutorService: extends ExecutorService with schedule(), which
executes a Runnable or Callable task after a given delay.
The scheduleAtFixedRate() and scheduleWithFixedDelay() methods
create and execute tasks that run periodically until cancelled.
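A minimal sketch of a periodic task that is cancelled after a few runs; the class name `ScheduleDemo`, the latch, and the 10 ms period are my assumptions:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    // Run a task repeatedly until at least three executions have happened,
    // then cancel it.
    public static boolean ranThreeTimes() throws InterruptedException {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);
        CountDownLatch threeRuns = new CountDownLatch(3);
        ScheduledFuture<?> handle = ses.scheduleAtFixedRate(
                threeRuns::countDown, 0, 10, TimeUnit.MILLISECONDS);
        boolean ok = threeRuns.await(5, TimeUnit.SECONDS);
        handle.cancel(false);           // periodic tasks run until cancelled
        ses.shutdown();
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("ran three times: " + ranThreeTimes());
    }
}
```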
Fork/Join framework - helps take advantage of multiple processors.
Designed for work that can be broken into smaller pieces recursively.
Huge numbers of tasks and subtasks may be hosted by a small
number of actual threads in a ForkJoinPool.
ForkJoinPool extends the AbstractExecutorService class.
ForkJoinPool implements the core work-stealing algorithm and can
execute ForkJoinTask tasks. It restricts the maximum number of
running threads to 32767 (a non-negative integer); attempts to create
pools with more threads result in IllegalArgumentException.
ForkJoinTask is a lightweight form of Future.
ForkJoinTask's primary coordination mechanisms are fork(), which
arranges asynchronous execution, and join(), which does not proceed
until the task's result has been computed.
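The fork()/join() pair can be sketched with a RecursiveTask that sums an array; the class name `SumTask` and the threshold value are my choices:

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {     // small enough: compute directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;      // otherwise split recursively
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                    // arrange asynchronous execution
        return right.compute() + left.join();  // join() waits for the result
    }

    public static long parallelSum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i;
        System.out.println(parallelSum(data));
    }
}
```

Forking the left half and computing the right half directly keeps the current thread busy instead of idling while it waits.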
When using ForkJoinTask, ideally avoid synchronized methods or blocks,
and minimize other blocking synchronization.
The fork/join common pool is used to execute any task that is not explicitly
submitted to a specific pool,
e.g. by the java.util.Arrays class for its parallelSort() methods (Java 8), and
by the java.util.stream package, which is part of Project Lambda (Java 8).
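A small sketch of parallelSort(), which runs on the common pool; the class name `ParallelSortDemo` is mine:

```java
import java.util.Arrays;

public class ParallelSortDemo {
    // parallelSort() splits the array, sorts the pieces in the common
    // ForkJoinPool, and merges the sorted pieces back together.
    public static int[] sorted(int[] values) {
        int[] copy = values.clone();
        Arrays.parallelSort(copy);
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sorted(new int[]{5, 1, 4, 2, 3})));
    }
}
```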
The java.util.concurrent package and its subpackages extend the Java Memory
Model's guarantees to higher-level synchronization (happens-before relationships):
Actions in a thread prior to placing an object into any
concurrent collection happen-before actions subsequent to the
access or removal of that element from the collection in another thread.
Actions in a thread prior to the submission of a Runnable to an
Executor happen-before its execution begins. Similarly for
Callables submitted to an ExecutorService.
Actions taken by the asynchronous computation represented by a
Future happen-before actions subsequent to the retrieval of the
result via Future.get() in another thread.
Lock.unlock, Semaphore.release and CountDownLatch.countDown
happen-before actions subsequent to a successful 'acquiring'
method such as Lock.lock, Semaphore.acquire, Condition.await and
CountDownLatch.await on the same synchronizer object in another thread.
For each pair of threads that successfully exchange objects via
an Exchanger, actions prior to the exchange() in each thread
happen-before those subsequent to the corresponding exchange() in
the other thread.
Actions prior to calling CyclicBarrier.await and
Phaser.awaitAdvance happen-before actions performed by the barrier
action and actions performed by the barrier action happen-before
actions subsequent to a successful return from the corresponding
await in other threads.
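The CountDownLatch rule above can be sketched concretely; the class name `HappensBeforeDemo` and the plain (non-volatile) field are my choices to show that the latch alone provides the visibility:

```java
import java.util.concurrent.CountDownLatch;

public class HappensBeforeDemo {
    static int shared;                  // deliberately not volatile

    // The write to `shared` happens-before countDown(), which happens-before
    // a successful await() in another thread, so the read below is safe.
    public static int publishAndRead() throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);
        Thread writer = new Thread(() -> {
            shared = 42;
            ready.countDown();
        });
        writer.start();
        ready.await();
        int seen = shared;              // guaranteed to observe 42
        writer.join();
        return seen;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndRead());   // prints 42
    }
}
```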
ConcurrentHashMap - all operations are thread-safe; it has the same functional
specification as Hashtable.
Supports full concurrency of retrievals (get() does not entail
locking) and adjustable expected concurrency for updates.
The table is internally partitioned to try to permit the indicated
number of concurrent updates (default 16) without contention. It is best
practice to provide estimates of expected table sizes in the
constructor; otherwise the map may perform costly internal resizing,
slowing operations down.
Ideally, you should choose a concurrency value to accommodate as many
threads as will ever concurrently modify the table. Using a
significantly higher value than you need can waste space and time,
and a significantly lower value can lead to thread contention
(relatively slow operations).
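A sketch of concurrent updates to a ConcurrentHashMap with a size estimate passed to the constructor; the class name `WordCountDemo` and the two-thread word count are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCountDemo {
    // merge() is atomic per key, so concurrent increments are never lost.
    public static ConcurrentHashMap<String, Integer> count(String[] words)
            throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts =
                new ConcurrentHashMap<>(64);    // size estimate up front
        Thread t1 = new Thread(() -> {
            for (String w : words) counts.merge(w, 1, Integer::sum);
        });
        Thread t2 = new Thread(() -> {
            for (String w : words) counts.merge(w, 1, Integer::sum);
        });
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(new String[]{"a", "b", "a"}));
    }
}
```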
BlockingQueue - a first-in-first-out data structure that blocks or times out
when you attempt to add to a full queue or retrieve from an empty queue.
Implementations do not accept null values (they throw NullPointerException),
are thread-safe, have atomic queuing methods, and use internal locks.
Implementations include ArrayBlockingQueue,
LinkedBlockingQueue, PriorityBlockingQueue, and SynchronousQueue.
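The blocking behavior can be sketched with a producer/consumer pair over a small ArrayBlockingQueue; the class name `PipelineDemo` and capacity 2 are my choices:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PipelineDemo {
    // put() blocks when the queue is full; take() blocks when it is empty.
    public static List<Integer> produceAndConsume() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // small capacity
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) queue.put(i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        List<Integer> consumed = new ArrayList<>();
        for (int i = 0; i < 5; i++) consumed.add(queue.take());
        producer.join();
        return consumed;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(produceAndConsume());   // [1, 2, 3, 4, 5]
    }
}
```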
ThreadPoolExecutor class -
automatically adjusts the pool size according to the bounds set by
corePoolSize and maximumPoolSize.
Even core threads are initially created and started only when
new tasks arrive, but this can be overridden dynamically using
prestartCoreThread() or prestartAllCoreThreads().
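A sketch contrasting lazy core-thread creation with prestarting; the class name `PrestartDemo` and the pool sizes are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    // Core threads normally start lazily as tasks arrive;
    // prestartAllCoreThreads() creates them all immediately.
    public static int prestartedPoolSize() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3,                      // corePoolSize
                6,                      // maximumPoolSize
                60, TimeUnit.SECONDS,   // keep-alive for threads beyond core
                new LinkedBlockingQueue<>());
        int before = pool.getPoolSize();        // 0: no tasks submitted yet
        pool.prestartAllCoreThreads();
        int after = pool.getPoolSize();         // 3: all core threads started
        pool.shutdown();
        return after - before;
    }

    public static void main(String[] args) {
        System.out.println("core threads prestarted: " + prestartedPoolSize());
    }
}
```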