public final class BTreeCounters extends Object implements Cloneable, ICounterSetAccess

Statistics counters for an AbstractBTree.

Note: This class DOES NOT hold a hard reference to the AbstractBTree. Holding an instance of this class WILL NOT force the AbstractBTree to remain strongly reachable.

Note: Counters for mutation are plain fields. Counters for read-only operations are AtomicLong or AtomicInteger objects, since read operations may involve high concurrency and could otherwise lead to lost counter updates.
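The hazard the note describes is the classic lost update: a plain `counter++` is a read-modify-write that two threads can interleave. A minimal, self-contained sketch (not part of BTreeCounters) contrasting a plain long field with an AtomicLong under two concurrent writers:

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Plain field, like the mutation counters (updated by a single writer).
    static long plainCounter = 0;
    // Atomic field, like the read-path counters (updated concurrently).
    static final AtomicLong atomicCounter = new AtomicLong();

    public static long run() throws InterruptedException {
        plainCounter = 0;
        atomicCounter.set(0);
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                plainCounter++;                  // read-modify-write: can lose updates
                atomicCounter.incrementAndGet(); // atomic: never loses updates
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return atomicCounter.get(); // always exactly 200000
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("atomic=" + run() + " plain=" + plainCounter);
    }
}
```

The atomic total is always exact; the plain total may come up short under contention, which is why only the single-writer mutation counters can afford plain fields.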
Modifier and Type | Field and Description |
---|---|
AtomicLong | bytesOnStore_nodesAndLeaves: The #of bytes in node and leaf records on the backing store for the unisolated view of the index. |
AtomicLong | bytesOnStore_rawRecords: The #of bytes in the unisolated view of the index which are being used to store raw records. |
CAT | bytesRead |
CAT | bytesReleased |
CAT | bytesWritten |
CAT | cacheMisses: #of misses when testing the BTree cache (getChild()). |
CAT | cacheTests: #of tests of the BTree cache (getChild()). |
CAT | deserializeNanos: De-serialization time for nodes and leaves. |
int | headSplit |
int | leavesCopyOnWrite |
int | leavesJoined |
CAT | leavesRead: #of leaf read operations. |
int | leavesSplit |
CAT | leavesWritten |
CAT | ngetKey |
CAT | ngetValue |
CAT | nindexOf |
AtomicLong | ninserts: #of keys looked up in the tree by contains/lookup(key) (does not count those rejected by the bloom filter before they are tested against the B+Tree). |
int | nodesCopyOnWrite |
int | nodesJoined |
CAT | nodesRead: #of node read operations. |
int | nodesSplit |
CAT | nodesWritten |
CAT | nrangeCount |
CAT | nrangeIterator |
AtomicLong | nremoves |
long | ntupleInsertDelete: #of deleted tuples that were inserted into the B+Tree (rather than deleting the value for an existing tuple). |
long | ntupleInsertValue: #of non-deleted tuples that were inserted into the B+Tree (rather than updating the value for an existing tuple). |
long | ntupleRemove: #of pre-existing tuples that were removed from the B+Tree (only non-zero when the B+Tree does not support delete markers). |
long | ntupleUpdateDelete: #of pre-existing un-deleted tuples whose delete marker was set (we don't count re-deletes of an already deleted tuple). |
long | ntupleUpdateValue: #of pre-existing tuples whose value was updated to a non-deleted value (includes update of a deleted tuple to a non-deleted tuple by overwrite of the tuple). |
CAT | rawRecordsBytesRead: Total bytes read for raw records. |
CAT | rawRecordsBytesWritten |
CAT | rawRecordsRead: The #of raw record read operations. |
CAT | rawRecordsWritten |
CAT | readNanos: Read time for nodes and leaves (but not raw records). |
int | rootsJoined |
int | rootsSplit |
CAT | serializeNanos |
int | tailSplit |
CAT | writeNanos |
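As a worked example of a metric derivable from the counters above, cacheTests and cacheMisses admit a simple miss rate for the getChild() cache. The helper below is a hypothetical illustration, not part of this class:

```java
public class CacheRatioDemo {
    // Derived metric sketch: cacheMisses / cacheTests is the miss rate of the
    // BTree getChild() cache. The counter values used here are made up.
    static double missRate(long cacheTests, long cacheMisses) {
        return cacheTests == 0 ? 0d : (double) cacheMisses / cacheTests;
    }

    public static void main(String[] args) {
        // e.g. 1000 cache tests of which 250 missed => miss rate 0.25
        System.out.println(missRate(1000, 250));
    }
}
```

A low miss rate indicates that most node traversals are satisfied from cached children rather than reads against the backing store.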
Constructor and Description |
---|
BTreeCounters() |
BTreeCounters(BTreeCounters c): Copy constructor. |
Modifier and Type | Method and Description |
---|---|
void | add(BTreeCounters o): Adds the values from another BTreeCounters object to this one. |
BTreeCounters | clone() |
double | computeRawReadScore(): Return a score whose increasing value is correlated with the amount of read activity on an index as reflected in these BTreeCounters. |
double | computeRawReadWriteScore(): Return a score whose increasing value is correlated with the amount of read/write activity on an index as reflected in these BTreeCounters. |
double | computeRawWriteScore(): Return a score whose increasing value is correlated with the amount of write activity on an index as reflected in these BTreeCounters. |
boolean | equals(Object o): Equal iff they are the same instance. |
long | getBytesRead(): The number of bytes read from the backing store. |
long | getBytesReleased(): The number of bytes released from the backing store. |
long | getBytesWritten(): The number of bytes written onto the backing store. |
CounterSet | getCounters(): Return a CounterSet reporting on the various counters tracked in the instance fields of this class. |
long | getLeavesWritten(): The #of leaves written on the backing store. |
long | getNodesWritten(): The #of nodes written on the backing store. |
static double | normalize(double rawScore, double totalRawScore): Normalizes a raw score in the context of totals for some data service. |
BTreeCounters | subtract(BTreeCounters o): Subtracts the given counters from the current counters, returning a new counter object containing their difference. |
String | toString() |
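The copy constructor together with subtract(BTreeCounters) supports a common snapshot-and-diff pattern for per-interval statistics: copy the live counters, let work proceed, copy again, and subtract. The stand-in class below is a minimal sketch of that pattern, not the real BTreeCounters:

```java
public class DeltaDemo {
    // Tiny stand-in with the same copy-constructor / subtract shape.
    static final class Counters {
        long leavesRead, nodesRead;
        Counters() {}
        Counters(Counters c) { leavesRead = c.leavesRead; nodesRead = c.nodesRead; } // copy constructor
        Counters subtract(Counters o) {
            Counters d = new Counters();
            d.leavesRead = leavesRead - o.leavesRead;
            d.nodesRead = nodesRead - o.nodesRead;
            return d;
        }
    }

    static final Counters live = new Counters(); // cumulative, ever-growing counters

    public static long intervalLeavesRead() {
        Counters before = new Counters(live); // snapshot at start of interval
        live.leavesRead += 7;                 // simulated read activity
        live.nodesRead += 3;
        Counters after = new Counters(live);  // snapshot at end of interval
        return after.subtract(before).leavesRead; // delta for just this interval
    }

    public static void main(String[] args) {
        System.out.println(intervalLeavesRead()); // prints 7
    }
}
```

Because the live counters only grow, subtracting two snapshots isolates the activity within the sampling window regardless of how much history preceded it.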
public final AtomicLong ninserts
public final AtomicLong nremoves
public final CAT nindexOf
public final CAT ngetKey
public final CAT ngetValue
public final CAT nrangeCount
public final CAT nrangeIterator
public int rootsSplit
public int rootsJoined
public int nodesSplit
public int nodesJoined
public int leavesSplit
public int leavesJoined
public int tailSplit
public int headSplit
public int nodesCopyOnWrite
public int leavesCopyOnWrite
public long ntupleInsertValue
public long ntupleInsertDelete
public long ntupleUpdateValue
public long ntupleUpdateDelete
public long ntupleRemove
public final CAT cacheTests
public final CAT cacheMisses
public final CAT nodesRead
public final CAT leavesRead
public final CAT bytesRead
public final CAT readNanos
public final CAT deserializeNanos
public final CAT rawRecordsRead
public final CAT rawRecordsBytesRead
public CAT nodesWritten
public CAT leavesWritten
public CAT bytesWritten
public CAT bytesReleased
public CAT writeNanos
public CAT serializeNanos
public CAT rawRecordsWritten
public CAT rawRecordsBytesWritten
public final AtomicLong bytesOnStore_rawRecords
public final AtomicLong bytesOnStore_nodesAndLeaves
public BTreeCounters()

public BTreeCounters(BTreeCounters c)
Copy constructor.
Parameters: c

public boolean equals(Object o)
Equal iff they are the same instance.

public BTreeCounters clone()

public void add(BTreeCounters o)
Adds the values from another BTreeCounters object to this one.
Parameters: o

public BTreeCounters subtract(BTreeCounters o)
Subtracts the given counters from the current counters, returning a new counter object containing their difference.
Parameters: o

public double computeRawReadWriteScore()
Return a score whose increasing value is correlated with the amount of read/write activity on an index as reflected in these BTreeCounters.
The score is the serialization / deserialization time plus the read / write time. Time was chosen since it is a common unit and since it reflects the combination of CPU time, memory time (for allocations and garbage collection - the latter can be quite significant), and the disk wait time. The other main component of time is key search, but that is not instrumented right now.
Serialization and deserialization are basically a CPU activity and drive memory to the extent that allocations are made, especially during deserialization.
The read/write time is strongly dominated by actual DISK IO and by garbage collection time (garbage collection can cause threads to be suspended at any time). For deep B+Trees, DISK READ time dominates DISK WRITE time since increasing numbers of random reads are required to materialize any given leaf.
The total read-write cost (in seconds) for a BTree is one of the factors that is considered when choosing which index partition to move. Comparing the total read/write cost for BTrees across a database can help to reveal which index partitions have the heaviest load. If only a few index partitions for a given scale-out index have a heavy load, then those index partitions are hotspots for that index.
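Per the descriptions above, the read score is deserializeNanos + readNanos, the write score is serializeNanos + writeNanos, and the read/write score combines serialization/deserialization time with read/write time. A sketch under that stated definition (standalone helpers, not the real methods):

```java
public class ScoreDemo {
    // The read score: de-serialization time plus read time.
    static double rawReadScore(long deserializeNanos, long readNanos) {
        return deserializeNanos + readNanos;
    }

    // The write score: serialization time plus write time.
    static double rawWriteScore(long serializeNanos, long writeNanos) {
        return serializeNanos + writeNanos;
    }

    // The combined score: all four time components, in nanoseconds.
    static double rawReadWriteScore(long serializeNanos, long deserializeNanos,
                                    long readNanos, long writeNanos) {
        return rawReadScore(deserializeNanos, readNanos)
             + rawWriteScore(serializeNanos, writeNanos);
    }

    public static void main(String[] args) {
        // Hypothetical nanosecond totals: 100 serialize, 200 deserialize,
        // 300 read, 400 write => combined score 1000.0
        System.out.println(rawReadWriteScore(100, 200, 300, 400));
    }
}
```

Using time as the common unit is what lets read and write activity be summed into one comparable figure per index.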
public double computeRawReadScore()
Return a score whose increasing value is correlated with the amount of read activity on an index as reflected in these BTreeCounters.
The score is deserializeNanos + readNanos.
See Also: computeRawReadWriteScore()
public double computeRawWriteScore()
Return a score whose increasing value is correlated with the amount of write activity on an index as reflected in these BTreeCounters.
The score is serializeNanos plus writeNanos.
See Also: computeRawReadWriteScore()
public static double normalize(double rawScore, double totalRawScore)
Normalizes a raw score in the context of totals for some data service.
Parameters:
rawScore - The raw score.
totalRawScore - The raw score computed from the totals.

public final long getNodesWritten()
public final long getLeavesWritten()
public final long getBytesRead()
public final long getBytesWritten()
public final long getBytesReleased()
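The documentation above does not give normalize's formula, so the sketch below ASSUMES the natural reading: a raw score expressed as a fraction of the data service's total, guarded against a zero total. This is a hypothetical stand-in, not the real implementation:

```java
public class NormalizeDemo {
    // ASSUMPTION: normalization is the ratio of an index's raw score to the
    // total raw score across the data service, with a zero-total guard.
    static double normalize(double rawScore, double totalRawScore) {
        return totalRawScore == 0d ? 0d : rawScore / totalRawScore;
    }

    public static void main(String[] args) {
        // An index contributing 25.0 of a service-wide 100.0 normalizes to 0.25
        System.out.println(normalize(25.0, 100.0));
    }
}
```

Under this reading, normalized scores across index partitions sum to 1.0, which makes them directly comparable when deciding which partition to move.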
public CounterSet getCounters()
Return a CounterSet reporting on the various counters tracked in the instance fields of this class.
Specified by: getCounters in interface ICounterSetAccess
Copyright © 2006–2019 SYSTAP, LLC DBA Blazegraph. All rights reserved.