public class HardReferenceQueueWithBatchingUpdates<T>
extends Object
implements IHardReferenceQueue<T>

A wrapper for an IHardReferenceQueue that uses thread-local buffers to batch updates and thus minimize thread contention for the lock required to synchronize calls to add(Object).

Fields inherited from interface com.bigdata.cache.IHardReferenceQueue: DEFAULT_NSCAN
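The batching idea can be sketched in plain Java as follows. This is a minimal illustration of the technique only, not the bigdata implementation; the class and method names here are hypothetical. Each thread appends to an unsynchronized thread-local buffer and only takes the shared lock when the buffer fills, so the lock cost is amortized over a whole batch of adds.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of lock amortization via thread-local batching.
public class BatchingSketch {

    private final ReentrantLock lock = new ReentrantLock();
    private final List<Object> sharedQueue = new ArrayList<>();
    private final int batchCapacity;

    // One unsynchronized buffer per thread; no lock needed to append.
    private final ThreadLocal<List<Object>> localBuffer =
            ThreadLocal.withInitial(ArrayList::new);

    public BatchingSketch(int batchCapacity) {
        this.batchCapacity = batchCapacity;
    }

    /** Adds to the thread-local buffer; batches under the lock only when full. */
    public boolean add(Object ref) {
        final List<Object> buf = localBuffer.get();
        buf.add(ref);
        if (buf.size() >= batchCapacity) {
            flush(buf); // one lock acquisition for the whole batch
        }
        return true;
    }

    private void flush(List<Object> buf) {
        lock.lock();
        try {
            sharedQueue.addAll(buf);
        } finally {
            lock.unlock();
        }
        buf.clear();
    }

    public int sharedSize() {
        lock.lock();
        try {
            return sharedQueue.size();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        BatchingSketch q = new BatchingSketch(4);
        for (int i = 0; i < 10; i++) {
            q.add(i);
        }
        // Two full batches of 4 were flushed; 2 refs remain thread-local.
        System.out.println(q.sharedSize()); // prints 8
    }
}
```

Note the trade-off the class description implies: references linger in the thread-local buffer until the batch is flushed, which is why the real class also supports a try-lock threshold and a listener for batched updates.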
| Constructor and Description |
|---|
| HardReferenceQueueWithBatchingUpdates(boolean threadLocalBuffers, int concurrencyLevel, IHardReferenceQueue<T> sharedQueue, int threadLocalQueueNScan, int threadLocalQueueCapacity, int threadLocalTryLockSize, com.bigdata.cache.IBatchedUpdateListener<T> batchedUpdateListener) |
| HardReferenceQueueWithBatchingUpdates(IHardReferenceQueue<T> sharedQueue, int threadLocalQueueNScan, int threadLocalQueueCapacity, int threadLocalTryLockSize, com.bigdata.cache.IBatchedUpdateListener<T> batchedUpdateListener) Designated constructor. |
| Modifier and Type | Method and Description |
|---|---|
| boolean | add(T ref) Adds the reference to the thread-local queue, returning true iff the queue was modified as a result. |
| int | capacity() The capacity of the shared queue. |
| void | clear(boolean clearRefs) Discards the thread-local buffers and clears the backing ring buffer. |
| boolean | contains(Object ref) Not supported. |
| boolean | evict() Not supported. |
| void | evictAll(boolean clearRefs) Not supported. |
| boolean | isEmpty() Not supported. |
| boolean | isFull() Not supported. |
| int | nscan() The nscan value of the shared queue. |
| boolean | offer(T ref) Offers the reference to the thread-local queue, returning true iff the queue was modified as a result. |
| T | peek() Not supported. |
| int | size() The size of the shared queue (approximate). |
public HardReferenceQueueWithBatchingUpdates(IHardReferenceQueue<T> sharedQueue, int threadLocalQueueNScan, int threadLocalQueueCapacity, int threadLocalTryLockSize, com.bigdata.cache.IBatchedUpdateListener<T> batchedUpdateListener)

Designated constructor.

Parameters:
sharedQueue - The backing IHardReferenceQueue.
threadLocalQueueNScan - The #of references to scan on the thread-local queue.
threadLocalQueueCapacity - The capacity of the thread-local queues in which the updates are gathered before they are batched into the shared queue. This must be at least
threadLocalTryLockSize - Once the thread-local queue is this full, an attempt will be made to barge in on the lock and batch the updates to the shared queue. This feature may be disabled by passing ZERO (0).

public HardReferenceQueueWithBatchingUpdates(boolean threadLocalBuffers, int concurrencyLevel, IHardReferenceQueue<T> sharedQueue, int threadLocalQueueNScan, int threadLocalQueueCapacity, int threadLocalTryLockSize, com.bigdata.cache.IBatchedUpdateListener<T> batchedUpdateListener)
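The interplay of threadLocalQueueCapacity and threadLocalTryLockSize can be sketched as follows. This is a hypothetical illustration of the barge-in behavior described for threadLocalTryLockSize, with invented names, not the bigdata source: once the local buffer reaches the try-lock threshold we only batch if the shared lock happens to be free; at full capacity we must block for the lock.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the threadLocalTryLockSize "barge in" behavior.
public class TryLockBargeIn {

    private final ReentrantLock lock = new ReentrantLock();
    private final List<Object> sharedQueue = new ArrayList<>();
    private final int capacity;    // cf. threadLocalQueueCapacity
    private final int tryLockSize; // cf. threadLocalTryLockSize; ZERO (0) disables

    // One unsynchronized buffer per thread.
    private final ThreadLocal<List<Object>> localBuffer =
            ThreadLocal.withInitial(ArrayList::new);

    public TryLockBargeIn(int capacity, int tryLockSize) {
        this.capacity = capacity;
        this.tryLockSize = tryLockSize;
    }

    public void add(Object ref) {
        final List<Object> buf = localBuffer.get();
        buf.add(ref);
        if (buf.size() >= capacity) {
            // Full: we must batch now, blocking on the lock if necessary.
            lock.lock();
            try { drain(buf); } finally { lock.unlock(); }
        } else if (tryLockSize > 0 && buf.size() >= tryLockSize) {
            // Part full: barge in only if the lock happens to be free.
            if (lock.tryLock()) {
                try { drain(buf); } finally { lock.unlock(); }
            }
        }
    }

    private void drain(List<Object> buf) {
        sharedQueue.addAll(buf);
        buf.clear();
    }

    public int sharedSize() {
        lock.lock();
        try { return sharedQueue.size(); } finally { lock.unlock(); }
    }
}
```

With a single uncontended thread the try-lock always succeeds, so batches are flushed at the threshold rather than at capacity; under contention the thread keeps buffering until capacity forces a blocking flush.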
public int size()
The size of the shared queue (approximate).
Specified by: size in interface IHardReferenceQueue<T>

public int capacity()
The capacity of the shared queue.
Specified by: capacity in interface IHardReferenceQueue<T>

public int nscan()
The nscan value of the shared queue.
Specified by: nscan in interface IHardReferenceQueue<T>

public boolean evict()
Not supported.
Specified by: evict in interface IHardReferenceQueue<T>
See Also: HardReferenceQueueEvictionListener
public void evictAll(boolean clearRefs)
Not supported.
Specified by: evictAll in interface IHardReferenceQueue<T>
Parameters:
clearRefs - When true, the references are actually cleared from the cache. This may be false to force persistence of the references in the cache without actually clearing the cache.

public boolean isEmpty()
Not supported.
Specified by: isEmpty in interface IHardReferenceQueue<T>

public boolean isFull()
Not supported.
Specified by: isFull in interface IHardReferenceQueue<T>

public T peek()
Not supported.
Specified by: peek in interface IHardReferenceQueue<T>
public final boolean add(T ref)
Adds the reference to the thread-local queue, returning true iff the queue was modified as a result. When using true thread-local buffers, this is non-blocking unless the thread-local queue is full. If the thread-local queue is full, the existing references will be batched first onto the shared queue.

Contention can arise when using striped locks. For the synthetic test (on a 2 core laptop with 8 threads), an implementation using per-thread BatchQueues scores 6,984,896 ops/sec, whereas an implementation using striped locks scores only 4,654,673 ops/sec. One thread on the laptop has a throughput of 4,856,814 ops/sec, so the maximum possible throughput for 2 threads is ~9M ops/sec. The actual performance of the striped-lock approach depends on the degree of collision in the Thread.getId() values and the #of BatchQueue instances in the array.

While striped locks clearly have less throughput than thread-local BatchQueues, striped-lock performance can still be significantly better than implementations without lock-amortization strategies, and we do not have to worry about references on BatchQueues "escaping" when we rarely see requests for some threads (which is basically a memory leak).

Specified by: add in interface IHardReferenceQueue<T>
Parameters:
ref - The reference to be added.
See Also: BatchQueue
Throws:
InterruptedException - TODO Actually, using the CHM (7M ops/sec) rather than acquiring a permit (4.7M ops/sec) is MUCH less expensive even when using only one thread. The permit is clearly costing us. Striped locks might do as well as thread-local locks if we could replace the permit with a less expensive lock.

public final boolean offer(T ref)
Offers the reference to the thread-local queue, returning true iff the queue was modified as a result. This is non-blocking unless the thread-local queue is full. If the thread-local queue is full, the existing references will be batched first onto the shared queue.

public final void clear(boolean clearRefs)
Discards the thread-local buffers and clears the backing ring buffer.
Note: This method can have side-effects from asynchronous operations if the queue is still in use.
Specified by: clear in interface IHardReferenceQueue<T>
Parameters:
clearRefs - When true, the references are explicitly set to null, which can facilitate garbage collection.

public final boolean contains(Object ref)
Not supported.
Specified by: contains in interface IHardReferenceQueue<T>
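The striped-lock alternative discussed under add(T) can be sketched as follows. This is a hypothetical illustration with invented names, not the bigdata BatchQueue code: instead of one buffer per thread, a fixed array of locked buffers is indexed by Thread.getId() modulo the stripe count, so distinct threads may collide on a stripe, which is where the contention (and the lower throughput quoted above) comes from.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of striped locks keyed by Thread.getId().
public class StripedBatchQueues {

    private static final class Stripe {
        final ReentrantLock lock = new ReentrantLock();
        final List<Object> buffer = new ArrayList<>();
    }

    private final Stripe[] stripes;

    public StripedBatchQueues(int concurrencyLevel) {
        stripes = new Stripe[concurrencyLevel];
        for (int i = 0; i < concurrencyLevel; i++) {
            stripes[i] = new Stripe();
        }
    }

    private Stripe stripeForCurrentThread() {
        // The degree of collision depends on how Thread.getId() values
        // map onto the available stripes.
        final int i = (int) (Thread.currentThread().getId() % stripes.length);
        return stripes[i];
    }

    public void add(Object ref) {
        final Stripe s = stripeForCurrentThread();
        s.lock.lock(); // contended only when getId() values collide
        try {
            s.buffer.add(ref);
        } finally {
            s.lock.unlock();
        }
    }

    public int totalSize() {
        int n = 0;
        for (Stripe s : stripes) {
            s.lock.lock();
            try { n += s.buffer.size(); } finally { s.lock.unlock(); }
        }
        return n;
    }
}
```

Unlike true thread-local buffers, a stripe is reachable from every thread that hashes to it, so references cannot "escape" on an idle thread's private buffer; the price is lock contention on colliding stripes.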
Copyright © 2006–2019 SYSTAP, LLC DBA Blazegraph. All rights reserved.