@Internal public class DynamicKafkaSourceReader<T> extends Object implements org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>
Wraps multiple KafkaSourceReaders to collect records and commit offsets
from multiple Kafka clusters. This reader also handles changes to Kafka topology by reacting to
the restart sequence initiated by the enumerator and suspending inconsistent sub-readers.
First, in the restart sequence, the reader receives a MetadataUpdateEvent from the
enumerator, stops all KafkaSourceReaders, and retains only the splits that are still relevant
to the new topology. Second, the enumerator sends all new splits that the readers should work
on (old splits are not sent again).
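The two-phase restart sequence above can be sketched as plain state bookkeeping: drop splits for clusters no longer in the metadata, then merge in only the new splits the enumerator sends. The following is an illustrative, simplified model, not the actual Flink internals; the class and method names are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical sketch of the restart sequence described above.
public class RestartSequenceSketch {

    // Phase 1: on a MetadataUpdateEvent, keep only splits whose cluster is
    // still part of the new topology.
    static Map<String, List<String>> retainRelevantSplits(
            Map<String, List<String>> splitsByCluster, Set<String> activeClusters) {
        return splitsByCluster.entrySet().stream()
                .filter(e -> activeClusters.contains(e.getKey()))
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    // Phase 2: the enumerator sends only new splits; merge them into the
    // retained state (old splits are not re-sent).
    static Map<String, List<String>> addNewSplits(
            Map<String, List<String>> retained, Map<String, List<String>> newSplits) {
        Map<String, List<String>> merged = new HashMap<>(retained);
        newSplits.forEach((cluster, splits) ->
                merged.computeIfAbsent(cluster, k -> new ArrayList<>()).addAll(splits));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, List<String>> current = new HashMap<>();
        current.put("clusterA", new ArrayList<>(List.of("topicA-0")));
        current.put("clusterB", new ArrayList<>(List.of("topicB-0")));

        // clusterB was removed from the metadata; clusterC was added.
        Map<String, List<String>> retained =
                retainRelevantSplits(current, Set.of("clusterA", "clusterC"));
        Map<String, List<String>> merged =
                addNewSplits(retained, Map.of("clusterC", List.of("topicC-0")));

        System.out.println(merged.containsKey("clusterB")); // false: suspended
        System.out.println(merged.keySet().size());         // 2
    }
}
```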
| Constructor and Description |
|---|
| `DynamicKafkaSourceReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext, KafkaRecordDeserializationSchema<T> deserializationSchema, Properties properties)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `addSplits(List<DynamicKafkaSourceSplit> splits)` |
| `void` | `close()` |
| `org.apache.flink.streaming.runtime.io.MultipleFuturesAvailabilityHelper` | `getAvailabilityHelper()` |
| `void` | `handleSourceEvents(org.apache.flink.api.connector.source.SourceEvent sourceEvent)` Duplicate source events are handled with idempotency. |
| `boolean` | `isActivelyConsumingSplits()` |
| `CompletableFuture<Void>` | `isAvailable()` |
| `void` | `notifyCheckpointComplete(long checkpointId)` |
| `void` | `notifyNoMoreSplits()` |
| `org.apache.flink.core.io.InputStatus` | `pollNext(org.apache.flink.api.connector.source.ReaderOutput<T> readerOutput)` |
| `List<DynamicKafkaSourceSplit>` | `snapshotState(long checkpointId)` |
| `void` | `start()` Invoked first, and only at reader startup without restored state. |
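The note on `handleSourceEvents` says duplicate source events are handled idempotently: receiving the same metadata update twice must not trigger a second restart. A minimal sketch of that idea, assuming events can be keyed by some identity (the name `IdempotentEventHandler` and the string keys are illustrative, not Flink's implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: apply each distinct source event exactly once.
public class IdempotentEventHandler {
    private final Set<String> appliedEvents = new HashSet<>();
    int restarts = 0;

    // Returns true only the first time a given event identity is seen;
    // duplicates are ignored, so the restart side effect runs at most once.
    boolean handleSourceEvent(String eventId) {
        if (!appliedEvents.add(eventId)) {
            return false; // duplicate event: no-op
        }
        restarts++; // apply the metadata change exactly once
        return true;
    }

    public static void main(String[] args) {
        IdempotentEventHandler handler = new IdempotentEventHandler();
        System.out.println(handler.handleSourceEvent("metadata-v1")); // true
        System.out.println(handler.handleSourceEvent("metadata-v1")); // false
        System.out.println(handler.restarts);                         // 1
    }
}
```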
Methods inherited from class java.lang.Object: `clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait`

public DynamicKafkaSourceReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext,
KafkaRecordDeserializationSchema<T> deserializationSchema,
Properties properties)
public void start()
Specified by: start in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public org.apache.flink.core.io.InputStatus pollNext(org.apache.flink.api.connector.source.ReaderOutput<T> readerOutput) throws Exception
Specified by: pollNext in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>
Throws: Exception

public void addSplits(List<DynamicKafkaSourceSplit> splits)
Specified by: addSplits in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public void handleSourceEvents(org.apache.flink.api.connector.source.SourceEvent sourceEvent)
Specified by: handleSourceEvents in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public List<DynamicKafkaSourceSplit> snapshotState(long checkpointId)
Specified by: snapshotState in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public CompletableFuture<Void> isAvailable()
Specified by: isAvailable in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public void notifyNoMoreSplits()
Specified by: notifyNoMoreSplits in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>

public void notifyCheckpointComplete(long checkpointId) throws Exception
Specified by: notifyCheckpointComplete in interface org.apache.flink.api.common.state.CheckpointListener
Specified by: notifyCheckpointComplete in interface org.apache.flink.api.connector.source.SourceReader<T,DynamicKafkaSourceSplit>
Throws: Exception

public void close() throws Exception
Specified by: close in interface AutoCloseable
Throws: Exception

@VisibleForTesting public org.apache.flink.streaming.runtime.io.MultipleFuturesAvailabilityHelper getAvailabilityHelper()

@VisibleForTesting public boolean isActivelyConsumingSplits()
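Since this reader multiplexes several sub-readers, `isAvailable()` should conceptually complete as soon as any one sub-reader has data. A hedged sketch of that idea using plain `CompletableFuture.anyOf` (this is not Flink's `MultipleFuturesAvailabilityHelper`; the helper class and method names below are illustrative):

```java
import java.util.concurrent.CompletableFuture;

// Illustrative sketch: the combined reader becomes available as soon as ANY
// sub-reader's availability future completes.
public class AvailabilitySketch {
    static CompletableFuture<Void> combinedAvailability(CompletableFuture<?>... subReaders) {
        // anyOf completes when the first of the given futures completes.
        return CompletableFuture.anyOf(subReaders).thenApply(ignored -> (Void) null);
    }

    public static void main(String[] args) {
        CompletableFuture<Void> readerA = new CompletableFuture<>();
        CompletableFuture<Void> readerB = new CompletableFuture<>();
        CompletableFuture<Void> combined = combinedAvailability(readerA, readerB);

        System.out.println(combined.isDone()); // false: no sub-reader is ready yet
        readerB.complete(null);                // one sub-reader becomes available
        System.out.println(combined.isDone()); // true
    }
}
```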
Copyright © 2022–2024 The Apache Software Foundation. All rights reserved.