public class LRUCache<K,T> extends Object implements ICachePolicy<K,T>
| Modifier and Type | Class and Description |
|---|---|
| protected static interface | LRUCache.ICacheOrderChangeListener<K,T> |
| Modifier and Type | Field and Description |
|---|---|
| protected static boolean | INFO |
| protected static org.apache.log4j.Logger | log |
| Constructor and Description |
|---|
| LRUCache(int capacity) Create an LRU cache with a default load factor of 0.75. |
| LRUCache(int capacity, float loadFactor) Create an LRU cache with the specified capacity and load factor. |
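A minimal construction sketch based on the two constructors summarized above. The com.bigdata.cache package name is an assumption, and the key/value types, capacity, and load factor are arbitrary example values.

```java
import com.bigdata.cache.LRUCache; // assumed package for this class

public class LRUCacheConstructionExample {
    public static void main(String[] args) {
        // Capacity only: uses the documented default load factor of 0.75.
        LRUCache<Long, String> defaults = new LRUCache<Long, String>(1000);

        // Explicit capacity and load factor for the internal hash table.
        LRUCache<Long, String> tuned = new LRUCache<Long, String>(1000, 0.75f);

        System.out.println(defaults.capacity()); // 1000
        System.out.println(tuned.capacity());    // 1000
    }
}
```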
| Modifier and Type | Method and Description |
|---|---|
| protected void | addCacheOrderChangeListener(LRUCache.ICacheOrderChangeListener<K,T> l) Registers a listener for removeEntry events. |
| int | capacity() The capacity of the cache. |
| void | clear() Clear all objects from the cache. |
| Iterator<ICacheEntry<K,T>> | entryIterator() Visits entries in the cache in LRU ordering (the least recently used object is visited first). |
| protected void | finalize() Writes cache performance statistics. |
| T | get(K key) Return the indicated object from the cache or null if the object is not in cache. |
| ICacheListener<K,T> | getCacheListener() Return the cache eviction listener. |
| double | getHitRatio() |
| long | getInsertCount() |
| String | getStatistics() |
| long | getSuccessCount() |
| long | getTestCount() |
| Iterator<T> | iterator() Visits objects in the cache in LRU ordering (the least recently used object is visited first). |
| void | put(K key, T obj, boolean dirty) Add the object to the hash map under the key if it is not already there and update the entry ordering (this can be used to touch an entry). |
| T | remove(K key) Remove the indicated object from the cache. |
| protected void | removeCacheOrderChangeListener(LRUCache.ICacheOrderChangeListener<K,T> l) Unregister the listener. |
| void | resetStatistics() |
| void | setListener(ICacheListener<K,T> listener) Sets the cache eviction listener on the hard reference cache. |
| int | size() The #of entries in the cache. |
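The table above summarizes the full public API. Below is a minimal put/get/remove sketch, assuming the class lives in the com.bigdata.cache package; the key/value types and the dirty flag values are arbitrary example choices.

```java
import com.bigdata.cache.LRUCache; // assumed package

public class LRUCacheBasicUsage {
    public static void main(String[] args) {
        LRUCache<Long, String> cache = new LRUCache<Long, String>(100);

        // put() adds the object under the key if it is not already there and
        // updates the entry ordering; the dirty flag is carried on the entry.
        cache.put(1L, "one", false);
        cache.put(2L, "two", true);

        // get() returns the cached object or null if the key is not in cache.
        String one = cache.get(1L);      // "one"
        String missing = cache.get(3L);  // null

        // remove() removes and returns the object under the key.
        String removed = cache.remove(2L); // "two"

        System.out.println(cache.size() + " / " + cache.capacity()); // 1 / 100
        System.out.println(one + ", " + missing + ", " + removed);

        cache.clear(); // drop all cached objects
    }
}
```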
protected static final org.apache.log4j.Logger log
protected static final boolean INFO
public LRUCache(int capacity)
Create an LRU cache with a default load factor of 0.75.
Parameters:
capacity - The capacity of the cache.

public LRUCache(int capacity, float loadFactor)
Create an LRU cache with the specified capacity and load factor.
Parameters:
capacity - The capacity of the cache (must be positive).
loadFactor - The load factor for the internal hash table.

public double getHitRatio()
public long getInsertCount()
public long getTestCount()
public long getSuccessCount()
public void setListener(ICacheListener<K,T> listener)
Specified by: setListener in interface ICachePolicy<K,T>
Parameters:
listener - The listener or null to remove any listener.

public ICacheListener<K,T> getCacheListener()
Specified by: getCacheListener in interface ICachePolicy<K,T>

public Iterator<T> iterator()
Visits objects in the cache in LRU ordering (the least recently used object is visited first).
The returned iterator is NOT thread safe. It supports removal but does NOT support concurrent modification of the cache state. Normally the iterator is used during a commit and the framework guarantees that concurrent modifications will not occur.
Specified by: iterator in interface ICachePolicy<K,T>
See Also: ICachePolicy.entryIterator()

public Iterator<ICacheEntry<K,T>> entryIterator()
Visits entries in the cache in LRU ordering (the least recently used object is visited first).
The returned iterator is NOT thread safe. It supports removal but does NOT support concurrent modification of the cache state. Normally the iterator is used during a commit and the framework guarantees that concurrent modifications will not occur.
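A traversal sketch for the iterator() and entryIterator() methods described above: both visit the cache in LRU order, least recently used first. The com.bigdata.cache package name is an assumption, and since ICacheEntry's own methods are not documented on this page, the entry loop only counts entries.

```java
import java.util.Iterator;
import com.bigdata.cache.ICacheEntry; // assumed package
import com.bigdata.cache.LRUCache;    // assumed package

public class LRUCacheIterationExample {
    public static void main(String[] args) {
        LRUCache<Long, String> cache = new LRUCache<Long, String>(10);
        cache.put(1L, "a", false);
        cache.put(2L, "b", false);
        cache.put(3L, "c", false);

        // iterator(): visits the cached objects, least recently used first.
        // Per the documentation it is not thread safe, so traverse it while
        // no other thread is modifying the cache.
        for (Iterator<String> it = cache.iterator(); it.hasNext();) {
            System.out.println(it.next()); // "a", then "b", then "c"
        }

        // entryIterator(): same LRU order, but exposes the ICacheEntry
        // wrappers instead of the bare objects.
        int entries = 0;
        for (Iterator<ICacheEntry<Long, String>> it = cache.entryIterator(); it.hasNext();) {
            it.next();
            entries++;
        }
        System.out.println(entries); // 3
    }
}
```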
Specified by: entryIterator in interface ICachePolicy<K,T>
Returns: An iterator visiting the ICacheEntry objects. If this is a weak reference cache, then the iterator visits the entries in the delegate hard reference cache.
See Also: ICacheEntry, ICachePolicy.iterator()

public void clear()
Specified by: clear in interface ICachePolicy<K,T>

public void resetStatistics()
public int size()
Specified by: size in interface ICachePolicy<K,T>

public int capacity()
Specified by: capacity in interface ICachePolicy<K,T>

protected void finalize() throws Throwable
public String getStatistics()
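A sketch of the statistics accessors (getHitRatio(), getInsertCount(), getTestCount(), getSuccessCount(), getStatistics(), resetStatistics()). Their exact semantics are not described on this page, so the sketch only reads and prints them; the package name is an assumption.

```java
import com.bigdata.cache.LRUCache; // assumed package

public class LRUCacheStatisticsExample {
    public static void main(String[] args) {
        LRUCache<Long, String> cache = new LRUCache<Long, String>(100);

        cache.resetStatistics(); // start counting from zero

        cache.put(1L, "one", false);
        cache.get(1L); // expected hit
        cache.get(2L); // expected miss

        // Raw counters maintained by the cache.
        System.out.println("inserts : " + cache.getInsertCount());
        System.out.println("tests   : " + cache.getTestCount());
        System.out.println("success : " + cache.getSuccessCount());
        System.out.println("hitRatio: " + cache.getHitRatio());

        // Human-readable summary; per the summary table, finalize() also
        // writes these cache performance statistics.
        System.out.println(cache.getStatistics());
    }
}
```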
public void put(K key, T obj, boolean dirty)
Add the object to the hash map under the key if it is not already there and update the entry ordering (this can be used to touch an entry).
Cache evictions are only performed at or over capacity, but not for reentrant invocations. If a cache eviction causes a nested put(long, Object, boolean), the cache enters a temporary over capacity condition. The nested eviction is effectively deferred and a new cache entry is created for the incoming object rather than recycling the LRU cache entry. This temporary over capacity state exists until the primary eviction event has been handled, at which point entries are purged from the cache until it has one free entry. That free entry is then used to cache the incoming object which triggered the outer eviction event.
This is not the only coherent manner in which nested eviction events could be handled, but it is perhaps the simplest. This technique MUST NOT be used with an open array hash table since the temporary over capacity condition would not be supported.
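A sketch of the behavior described above under normal (non-reentrant) conditions: re-putting an existing key only touches it, and putting a new key while the cache is at capacity evicts the least recently used entry. The package name is an assumption, and the expected outputs are inferred from the LRU contract rather than stated on this page.

```java
import com.bigdata.cache.LRUCache; // assumed package

public class LRUCacheEvictionExample {
    public static void main(String[] args) {
        LRUCache<Long, String> cache = new LRUCache<Long, String>(2);

        cache.put(1L, "one", false);
        cache.put(2L, "two", false);

        // Re-putting key 1 does not re-insert it; it only updates the entry
        // ordering, i.e. it "touches" the entry as described above.
        cache.put(1L, "one", false);

        // The cache is at capacity, so inserting a third key should evict
        // the least recently used entry, which is now key 2.
        cache.put(3L, "three", false);

        System.out.println(cache.get(2L)); // expected: null (evicted)
        System.out.println(cache.get(1L)); // expected: "one"
        System.out.println(cache.size());  // expected: 2
    }
}
```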
Specified by: put in interface ICachePolicy<K,T>
Parameters:
key - The object identifier.
obj - The object.
dirty - True iff the object is dirty.

public T get(K key)
Specified by: get in interface ICachePolicy<K,T>
Parameters:
key - The object identifier.

public T remove(K key)
Specified by: remove in interface ICachePolicy<K,T>
Parameters:
key - The object identifier.
Returns: The object under that key, or null if there was no object under that identifier.

protected void addCacheOrderChangeListener(LRUCache.ICacheOrderChangeListener<K,T> l)
Registers a listener for removeEntry events. This is used by the LRUIterator to handle concurrent modifications of the cache ordering during traversal.

protected void removeCacheOrderChangeListener(LRUCache.ICacheOrderChangeListener<K,T> l)
Unregister the listener.
Parameters:
l - The listener.