Class DynamicKafkaSourceEnumerator
java.lang.Object
org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
- All Implemented Interfaces:
AutoCloseable, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
@Internal
public class DynamicKafkaSourceEnumerator
extends Object
implements org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
This enumerator manages multiple
KafkaSourceEnumerators. It has no internal
synchronization, since it assumes single-threaded execution.
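The single-threaded, per-cluster delegation described above can be sketched as a plain map of sub-enumerators with no locking. This is an illustrative stand-in, not the Flink API: `SubEnumerator`, `TopLevelEnumerator`, `register`, and `clusterCount` are hypothetical names.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative stand-in for a per-cluster KafkaSourceEnumerator. */
interface SubEnumerator {
    void addReader(int subtaskId);
}

/**
 * Illustrative stand-in for the top-level enumerator: a plain HashMap and no
 * synchronization, because the coordinator invokes it from a single thread.
 */
class TopLevelEnumerator {
    private final Map<String, SubEnumerator> clusterEnumerators = new HashMap<>();

    /** Track one sub-enumerator per Kafka cluster. */
    void register(String clusterId, SubEnumerator sub) {
        clusterEnumerators.put(clusterId, sub);
    }

    /** Fan a reader registration out to every cluster's sub-enumerator. */
    void addReader(int subtaskId) {
        clusterEnumerators.values().forEach(sub -> sub.addReader(subtaskId));
    }

    int clusterCount() {
        return clusterEnumerators.size();
    }
}
```

Because all calls arrive on the coordinator thread, the map needs no `ConcurrentHashMap` or explicit locks; the same assumption underlies the real enumerator's lack of synchronization.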
Constructor Summary
Constructors
DynamicKafkaSourceEnumerator(KafkaStreamSubscriber kafkaStreamSubscriber, KafkaMetadataService kafkaMetadataService, org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, OffsetsInitializer startingOffsetsInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.Boundedness boundedness, DynamicKafkaSourceEnumState dynamicKafkaSourceEnumState)
Method Summary
void addReader(int subtaskId)
    NOTE: this happens at startup and failover.
void addSplitsBack(List<DynamicKafkaSourceSplit> splits, int subtaskId)
void close()
void handleSourceEvent(int subtaskId, org.apache.flink.api.connector.source.SourceEvent sourceEvent)
void handleSplitRequest(int subtaskId, String requesterHostname)
    Multi-cluster Kafka source readers will not request splits.
DynamicKafkaSourceEnumState snapshotState(long checkpointId)
    Besides checkpointing, this method is used in the restart sequence to retain the relevant assigned splits so that readers do not receive duplicate split assignments.
void start()
    Discover Kafka clusters and initialize sub enumerators.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.flink.api.common.state.CheckpointListener
notifyCheckpointAborted
Methods inherited from interface org.apache.flink.api.connector.source.SplitEnumerator
notifyCheckpointComplete
-
Constructor Details
-
DynamicKafkaSourceEnumerator
public DynamicKafkaSourceEnumerator(KafkaStreamSubscriber kafkaStreamSubscriber, KafkaMetadataService kafkaMetadataService, org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, OffsetsInitializer startingOffsetsInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.Boundedness boundedness, DynamicKafkaSourceEnumState dynamicKafkaSourceEnumState)
-
-
Method Details
-
start
public void start()
Discovers Kafka clusters and initializes the sub enumerators. Bypasses Kafka metadata service discovery if prior state exists. Exceptions while initializing a Kafka source are treated the same as Kafka state and metadata inconsistency.
- Specified by:
start in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
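The startup decision described above (restored state bypasses metadata service discovery) can be sketched as follows. This is a hedged, standalone illustration: `MetadataService`, `resolveClusters`, and `restoredClusters` are hypothetical names, not the Flink API.

```java
import java.util.Set;

/** Illustrative stand-in for KafkaMetadataService; not the Flink API. */
interface MetadataService {
    Set<String> discoverClusters();
}

/** Sketch of the start() decision: prior state wins over fresh discovery. */
class StartupSketch {
    static Set<String> resolveClusters(Set<String> restoredClusters, MetadataService metadata) {
        // Prior checkpoint state already names the clusters: bypass discovery.
        if (restoredClusters != null && !restoredClusters.isEmpty()) {
            return restoredClusters;
        }
        // Fresh start: ask the metadata service which clusters to enumerate.
        return metadata.discoverClusters();
    }
}
```

Skipping discovery on restore keeps the restored split assignments and the set of clusters consistent with each other until the next scheduled metadata refresh.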
-
handleSplitRequest
public void handleSplitRequest(int subtaskId, String requesterHostname)
Multi-cluster Kafka source readers will not request splits. Splits will be pushed to them, similarly for the sub enumerators.
- Specified by:
handleSplitRequest in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
addSplitsBack
public void addSplitsBack(List<DynamicKafkaSourceSplit> splits, int subtaskId)
- Specified by:
addSplitsBack in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
addReader
public void addReader(int subtaskId)
NOTE: this happens at startup and failover.
- Specified by:
addReader in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
snapshotState
public DynamicKafkaSourceEnumState snapshotState(long checkpointId) throws Exception
Besides checkpointing, this method is used in the restart sequence to retain the relevant assigned splits so that readers do not receive duplicate split assignments. See createEnumeratorWithAssignedTopicPartitions(String, Set, KafkaSourceEnumState, Properties).
- Specified by:
snapshotState in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
- Throws:
Exception
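The duplicate-assignment guard described for snapshotState() can be sketched as a filter: splits the snapshot already records as assigned are excluded from re-assignment after restart. `SnapshotSketch`, `splitsToAssign`, `pendingSplits`, and `assignedSplitIds` are illustrative names, not the Flink API.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

/** Sketch of the restart-time guard: never hand a reader a split that the
 *  restored snapshot already records as assigned. Illustrative only. */
class SnapshotSketch {
    static List<String> splitsToAssign(List<String> pendingSplits, Set<String> assignedSplitIds) {
        return pendingSplits.stream()
                .filter(splitId -> !assignedSplitIds.contains(splitId))
                .collect(Collectors.toList());
    }
}
```

Retaining the assigned-split set in the snapshot is what makes this filter possible: without it, a restarted enumerator could not tell which splits its readers already own.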
-
handleSourceEvent
public void handleSourceEvent(int subtaskId, org.apache.flink.api.connector.source.SourceEvent sourceEvent)
- Specified by:
handleSourceEvent in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
close
public void close() throws IOException
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
- Throws:
IOException