Class InMemoryPartition<T>

java.lang.Object
  org.apache.flink.runtime.operators.hash.InMemoryPartition<T>

Type Parameters:
  T - record type

public class InMemoryPartition<T> extends Object

In-memory partition with overflow buckets for CompactingHashTable.
-
-
Field Summary

Modifier and Type            Field
protected int                nextOverflowBucket
protected int                numOverflowSegments
protected org.apache.flink.core.memory.MemorySegment[]  overflowSegments
-
Constructor Summary

InMemoryPartition(org.apache.flink.api.common.typeutils.TypeSerializer<T> serializer, int partitionNumber, ListMemorySegmentSource memSource, int pageSize, int pageSizeInBits)
  Creates a new partition, in memory, with one buffer.
-
Method Summary

All Methods | Instance Methods | Concrete Methods | Deprecated Methods

void allocateSegments(int numberOfSegments)
  Attempts to allocate the specified number of segments; should only be used by the compaction partition. Fails silently if not enough segments are available, since the next compaction could still succeed.

long appendRecord(T record)
  Inserts the given object into the current buffer.

void clearAllMemory(List<org.apache.flink.core.memory.MemorySegment> target)
  Releases all of the partition's segments (pages and overflow buckets).

int getBlockCount()

int getPartitionNumber()
  Gets the partition number of this partition.

long getRecordCount()
  Number of records in the partition, including garbage.

boolean isCompacted()

void overwriteRecordAt(long pointer, T record)
  Deprecated. Don't use this; overwriting causes inconsistency or data loss for anything but records of the exact same size.

void pushDownPages()

T readRecordAt(long pointer)

T readRecordAt(long pointer, T reuse)

ArrayList<org.apache.flink.core.memory.MemorySegment> resetOverflowBuckets()
  Resets overflow bucket counters and returns the freed memory; should only be used for resizing.

void resetRecordCounter()
  Sets the record counter to zero; should only be used on the compaction partition.

void resetRWViews()
  Resets the read and write views; should only be used on the compaction partition.

void setIsCompacted(boolean compacted)
  Sets the compaction status (should only be set true directly after compaction and false when garbage was created).

void setPartitionNumber(int number)
  Overwrites the partition number; should only be used on the compaction partition.

String toString()
-
-
-
Constructor Detail
-
InMemoryPartition
public InMemoryPartition(org.apache.flink.api.common.typeutils.TypeSerializer<T> serializer, int partitionNumber, ListMemorySegmentSource memSource, int pageSize, int pageSizeInBits)
Creates a new partition, in memory, with one buffer.

Parameters:
  serializer - serializer for T
  partitionNumber - the number of the partition
  memSource - memory pool
  pageSize - segment size in bytes
  pageSizeInBits -
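The pageSize and pageSizeInBits parameters are related: a power-of-two segment size lets a record pointer be split into a page index and an in-page offset with shifts and masks instead of division. The exact pointer layout is internal to Flink; the following is a minimal stand-alone sketch of that typical encoding, with all values (15-bit pages, page 3, offset 1234) chosen purely for illustration.

```java
public class PointerSketch {
    public static void main(String[] args) {
        int pageSizeInBits = 15;             // hypothetical: 32 KiB segments
        int pageSize = 1 << pageSizeInBits;  // pageSize must equal 2^pageSizeInBits

        // Pack a (page index, offset) pair into one long pointer.
        long page = 3, offset = 1234;
        long pointer = (page << pageSizeInBits) | offset;

        // Unpack by shifting and masking with the page-size mask.
        long decodedPage = pointer >>> pageSizeInBits;
        long decodedOffset = pointer & (pageSize - 1);

        System.out.println(decodedPage + " " + decodedOffset); // 3 1234
    }
}
```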
-
-
Method Detail
-
getPartitionNumber
public int getPartitionNumber()
Gets the partition number of this partition.

Returns:
  This partition's number.
-
setPartitionNumber
public void setPartitionNumber(int number)
Overwrites the partition number; should only be used on the compaction partition.

Parameters:
  number - the new partition number
-
getBlockCount
public int getBlockCount()
- Returns:
- number of segments owned by partition
-
getRecordCount
public long getRecordCount()
Number of records in the partition, including garbage.

Returns:
  record count
-
resetRecordCounter
public void resetRecordCounter()
Sets the record counter to zero; should only be used on the compaction partition.
-
resetRWViews
public void resetRWViews()
Resets the read and write views; should only be used on the compaction partition.
-
pushDownPages
public void pushDownPages()
-
resetOverflowBuckets
public ArrayList<org.apache.flink.core.memory.MemorySegment> resetOverflowBuckets()
Resets overflow bucket counters and returns the freed memory; should only be used for resizing.

Returns:
  freed memory segments
-
isCompacted
public boolean isCompacted()
- Returns:
- true if garbage exists in partition
-
setIsCompacted
public void setIsCompacted(boolean compacted)
Sets the compaction status (should only be set true directly after compaction and false when garbage was created).

Parameters:
  compacted - compaction status
-
appendRecord
public final long appendRecord(T record) throws IOException
Inserts the given object into the current buffer. This method returns a pointer that can be used to address the written record in this partition.

Parameters:
  record - The object to be written to the partition.

Returns:
  A pointer to the object in the partition.
- Throws:
IOException- Thrown when the write failed.
-
readRecordAt
public T readRecordAt(long pointer, T reuse) throws IOException
- Throws:
IOException
-
readRecordAt
public T readRecordAt(long pointer) throws IOException
- Throws:
IOException
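The appendRecord/readRecordAt pair above forms a write-then-address-by-pointer contract: every append returns a long pointer, and that pointer later resolves the record. The toy class below is NOT Flink's implementation (it uses one length-prefixed byte buffer instead of memory segments and a TypeSerializer), but it illustrates the same contract in a self-contained way.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Toy stand-in (not Flink's InMemoryPartition) for the pointer contract:
// appendRecord returns a long pointer, readRecordAt resolves it.
public class ToyPartition {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // Append a UTF-8 string record; the returned pointer is its byte offset.
    long appendRecord(String record) throws IOException {
        long pointer = buffer.size();
        byte[] bytes = record.getBytes(StandardCharsets.UTF_8);
        // Length-prefix the payload so readRecordAt knows how much to read.
        new DataOutputStream(buffer).writeInt(bytes.length);
        buffer.write(bytes);
        return pointer;
    }

    String readRecordAt(long pointer) throws IOException {
        byte[] all = buffer.toByteArray();
        DataInputStream in = new DataInputStream(
            new ByteArrayInputStream(all, (int) pointer, all.length - (int) pointer));
        byte[] bytes = new byte[in.readInt()];
        in.readFully(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ToyPartition p = new ToyPartition();
        long a = p.appendRecord("alpha");
        long b = p.appendRecord("beta");
        System.out.println(p.readRecordAt(b)); // beta
        System.out.println(p.readRecordAt(a)); // alpha
    }
}
```

Note that, as in the real class, a pointer stays valid only as long as the partition's layout is unchanged; compaction or overwriting with a differently sized record would invalidate it.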
-
overwriteRecordAt
@Deprecated public void overwriteRecordAt(long pointer, T record) throws IOException
Deprecated. Don't use this; overwriting causes inconsistency or data loss for anything but records of the exact same size.

UNSAFE!! Overwrites a record; causes inconsistency or data loss when overwriting anything but a record of the exact same size.

Parameters:
  pointer - pointer to start of record
  record - record to overwrite the old one with

Throws:
  IOException
-
clearAllMemory
public void clearAllMemory(List<org.apache.flink.core.memory.MemorySegment> target)
Releases all of the partition's segments (pages and overflow buckets).

Parameters:
  target - memory pool to release segments to
-
allocateSegments
public void allocateSegments(int numberOfSegments)
Attempts to allocate the specified number of segments; should only be used by the compaction partition. Fails silently if not enough segments are available, since the next compaction could still succeed.

Parameters:
  numberOfSegments - allocation count
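"Fails silently" here means the method takes as many segments as the pool can supply and returns without throwing, on the theory that a later compaction attempt may find more memory. A hedged sketch of that pattern (FailSilentAlloc, the Deque-based pool, and byte[] segments are all illustrative stand-ins, not Flink types):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Sketch (not Flink's code) of fail-silent allocation: take up to n segments
// from a pool; if the pool runs dry, keep whatever was obtained and return
// quietly, since the next compaction attempt could still succeed.
public class FailSilentAlloc {
    static int allocateSegments(Deque<byte[]> pool, List<byte[]> partition, int n) {
        int allocated = 0;
        for (int i = 0; i < n && !pool.isEmpty(); i++) {
            partition.add(pool.pop());
            allocated++;
        }
        return allocated; // may be less than n; no exception is thrown
    }

    public static void main(String[] args) {
        Deque<byte[]> pool = new ArrayDeque<>();
        pool.push(new byte[32]);
        pool.push(new byte[32]);
        List<byte[]> partition = new ArrayList<>();
        // Ask for 5 segments while only 2 are available: gets 2, no error.
        System.out.println(allocateSegments(pool, partition, 5)); // 2
    }
}
```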
-
-