Class DynamicKafkaSourceEnumerator
- java.lang.Object
-
- org.apache.flink.connector.kafka.dynamic.source.enumerator.DynamicKafkaSourceEnumerator
-
- All Implemented Interfaces:
AutoCloseable, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
@Internal public class DynamicKafkaSourceEnumerator extends Object implements org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
This enumerator manages multiple KafkaSourceEnumerators. It does not use any synchronization, since it assumes single-threaded execution.
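The "multiple sub-enumerators, no synchronization" design can be illustrated with a plain-Java sketch. This is a hypothetical analogy, not the Flink API: `SubEnumerator`, `registerCluster`, and the cluster ids are invented stand-ins for the per-cluster KafkaSourceEnumerators that the real class manages on the single coordinator thread.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: one top-level enumerator delegating to one
// sub-enumerator per Kafka cluster. No locking is needed because all
// calls are assumed to arrive on a single coordinator thread, mirroring
// the single-threaded-execution assumption stated in the Javadoc.
class MultiplexingEnumeratorSketch {
    interface SubEnumerator {
        void start();
        void close();
    }

    // Keyed by cluster id; a plain map is safe under single-threaded access.
    private final Map<String, SubEnumerator> subEnumerators = new LinkedHashMap<>();

    void registerCluster(String clusterId, SubEnumerator sub) {
        subEnumerators.put(clusterId, sub);
        sub.start();
    }

    void closeAll() {
        subEnumerators.values().forEach(SubEnumerator::close);
        subEnumerators.clear();
    }

    public static void main(String[] args) {
        MultiplexingEnumeratorSketch top = new MultiplexingEnumeratorSketch();
        top.registerCluster("cluster-a", new SubEnumerator() {
            public void start() { System.out.println("started cluster-a"); }
            public void close() { System.out.println("closed cluster-a"); }
        });
        top.closeAll();
    }
}
```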
-
-
Constructor Summary
Constructors

Constructor and Description
DynamicKafkaSourceEnumerator(KafkaStreamSubscriber kafkaStreamSubscriber, KafkaMetadataService kafkaMetadataService, org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, OffsetsInitializer startingOffsetsInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.Boundedness boundedness, DynamicKafkaSourceEnumState dynamicKafkaSourceEnumState)
-
Method Summary
All Methods Instance Methods Concrete Methods

Modifier and Type, Method and Description
void addReader(int subtaskId)
    NOTE: this happens at startup and failover.
void addSplitsBack(List<DynamicKafkaSourceSplit> splits, int subtaskId)
void close()
void handleSourceEvent(int subtaskId, org.apache.flink.api.connector.source.SourceEvent sourceEvent)
void handleSplitRequest(int subtaskId, String requesterHostname)
    Multi-cluster Kafka source readers will not request splits.
DynamicKafkaSourceEnumState snapshotState(long checkpointId)
    Besides checkpointing, this method is used in the restart sequence to retain the relevant assigned splits so that no reader receives a duplicate split assignment.
void start()
    Discover Kafka clusters and initialize sub-enumerators.
-
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
-
-
-
-
Constructor Detail
-
DynamicKafkaSourceEnumerator
public DynamicKafkaSourceEnumerator(KafkaStreamSubscriber kafkaStreamSubscriber, KafkaMetadataService kafkaMetadataService, org.apache.flink.api.connector.source.SplitEnumeratorContext<DynamicKafkaSourceSplit> enumContext, OffsetsInitializer startingOffsetsInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.Boundedness boundedness, DynamicKafkaSourceEnumState dynamicKafkaSourceEnumState)
-
-
Method Detail
-
start
public void start()
Discovers Kafka clusters and initializes sub-enumerators. Kafka metadata service discovery is bypassed if prior state exists. Exceptions while initializing the Kafka source are treated the same as a Kafka state and metadata inconsistency.
- Specified by:
start in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
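The startup decision described above (bypass metadata-service discovery when prior state exists) can be sketched in plain Java. This is a hypothetical simplification: `restoredClusters` and the returned labels are invented for illustration and are not the real method's signature or behavior.

```java
import java.util.Set;

// Hypothetical sketch of the start() decision: if checkpointed state
// already lists clusters, initialize sub-enumerators from it; otherwise
// consult the metadata service to discover clusters.
class StartSketch {
    static String start(Set<String> restoredClusters) {
        if (restoredClusters.isEmpty()) {
            // No prior state: discover clusters via the metadata service.
            return "discover-via-metadata-service";
        }
        // Prior state present: skip discovery and restore directly.
        return "restore-from-state";
    }

    public static void main(String[] args) {
        System.out.println(start(Set.of()));            // fresh start
        System.out.println(start(Set.of("cluster-a"))); // restored from checkpoint
    }
}
```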
-
handleSplitRequest
public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname)
Multi-cluster Kafka source readers will not request splits; splits are pushed to them, and likewise to the sub-enumerators.
- Specified by:
handleSplitRequest in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
addSplitsBack
public void addSplitsBack(List<DynamicKafkaSourceSplit> splits, int subtaskId)
- Specified by:
addSplitsBack in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
addReader
public void addReader(int subtaskId)
NOTE: this happens at startup and failover.
- Specified by:
addReader in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
snapshotState
public DynamicKafkaSourceEnumState snapshotState(long checkpointId) throws Exception
Besides checkpointing, this method is used in the restart sequence to retain the relevant assigned splits so that no reader receives a duplicate split assignment. See createEnumeratorWithAssignedTopicPartitions(String, Set, KafkaSourceEnumState, Properties).
- Specified by:
snapshotState in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
- Throws:
Exception
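The idea of snapshotting so that assigned splits survive a restart can be sketched in plain Java. This is a hypothetical analogy only: `EnumState`, the string-based split names, and the cluster keys are invented stand-ins, not the real DynamicKafkaSourceEnumState structure.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a snapshot that copies each sub-enumerator's
// assigned splits into one top-level state object, so a restart can
// rebuild sub-enumerators without assigning the same split twice.
class SnapshotSketch {
    record EnumState(Map<String, List<String>> assignedSplitsByCluster) {}

    static EnumState snapshot(Map<String, List<String>> perClusterAssignments) {
        // Copy so later mutation of the live map does not affect the snapshot.
        return new EnumState(new LinkedHashMap<>(perClusterAssignments));
    }

    public static void main(String[] args) {
        Map<String, List<String>> live = new LinkedHashMap<>();
        live.put("cluster-a", List.of("topic-1-partition-0"));
        EnumState state = snapshot(live);
        System.out.println(state.assignedSplitsByCluster());
    }
}
```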
-
handleSourceEvent
public void handleSourceEvent(int subtaskId, org.apache.flink.api.connector.source.SourceEvent sourceEvent)
- Specified by:
handleSourceEvent in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
-
close
public void close() throws IOException
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface org.apache.flink.api.connector.source.SplitEnumerator<DynamicKafkaSourceSplit,DynamicKafkaSourceEnumState>
- Throws:
IOException
-
-