Uses of Class
org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit

Packages that use KafkaPartitionSplit:
  org.apache.flink.connector.kafka.dynamic.source.enumerator
  org.apache.flink.connector.kafka.dynamic.source.split
  org.apache.flink.connector.kafka.source
  org.apache.flink.connector.kafka.source.enumerator
  org.apache.flink.connector.kafka.source.reader
  org.apache.flink.connector.kafka.source.reader.fetcher
  org.apache.flink.connector.kafka.source.split
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.dynamic.source.enumerator
Method parameters in org.apache.flink.connector.kafka.dynamic.source.enumerator with type arguments of type KafkaPartitionSplit:
  void StoppableKafkaEnumContextProxy.assignSplits(org.apache.flink.api.connector.source.SplitsAssignment<KafkaPartitionSplit> newSplitAssignments)
    Wrap splits with cluster metadata.
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.dynamic.source.split
Subclasses of KafkaPartitionSplit in org.apache.flink.connector.kafka.dynamic.source.split:
  class DynamicKafkaSourceSplit
    Split that wraps KafkaPartitionSplit with Kafka cluster information.

Methods in org.apache.flink.connector.kafka.dynamic.source.split that return KafkaPartitionSplit

Constructors in org.apache.flink.connector.kafka.dynamic.source.split with parameters of type KafkaPartitionSplit:
  DynamicKafkaSourceSplit(String kafkaClusterId, KafkaPartitionSplit kafkaPartitionSplit)
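The constructor above pairs a plain split with the id of the Kafka cluster it came from. The following JDK-only sketch illustrates that shape; `Split` and `ClusterSplit` are local stand-ins for `KafkaPartitionSplit` and `DynamicKafkaSourceSplit`, and the field layout is an assumption for the sketch, not the Flink implementation.

```java
// JDK-only sketch of DynamicKafkaSourceSplit's shape: a subclass that carries
// the originating cluster's id alongside the inherited partition fields.
// All names here are local stand-ins, not the Flink classes.
public class WrappedSplitSketch {
    // Stands in for KafkaPartitionSplit.
    static class Split {
        final String topic;
        final int partition;
        final long startingOffset;
        Split(String topic, int partition, long startingOffset) {
            this.topic = topic;
            this.partition = partition;
            this.startingOffset = startingOffset;
        }
    }

    // Stands in for DynamicKafkaSourceSplit(String kafkaClusterId,
    // KafkaPartitionSplit kafkaPartitionSplit): same data, plus the cluster id.
    static class ClusterSplit extends Split {
        final String kafkaClusterId;
        ClusterSplit(String kafkaClusterId, Split split) {
            super(split.topic, split.partition, split.startingOffset);
            this.kafkaClusterId = kafkaClusterId;
        }
    }

    public static void main(String[] args) {
        ClusterSplit wrapped = new ClusterSplit("cluster-a", new Split("orders", 2, 0L));
        System.out.println(wrapped.kafkaClusterId + "/" + wrapped.topic); // prints "cluster-a/orders"
    }
}
```

Because the wrapper is itself a subclass of the split type, code written against the plain split (serializers, readers) can handle the wrapped form unchanged.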
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source
Methods in org.apache.flink.connector.kafka.source that return types with arguments of type KafkaPartitionSplit:
  org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit, KafkaSourceEnumState> KafkaSource.createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext)
  org.apache.flink.api.connector.source.SourceReader<OUT, KafkaPartitionSplit> KafkaSource.createReader(org.apache.flink.api.connector.source.SourceReaderContext readerContext)
  org.apache.flink.core.io.SimpleVersionedSerializer<KafkaPartitionSplit> KafkaSource.getSplitSerializer()
  org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit, KafkaSourceEnumState> KafkaSource.restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext, KafkaSourceEnumState checkpoint)

Method parameters in org.apache.flink.connector.kafka.source with type arguments of type KafkaPartitionSplit:
  org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit, KafkaSourceEnumState> KafkaSource.createEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext)
  org.apache.flink.api.connector.source.SplitEnumerator<KafkaPartitionSplit, KafkaSourceEnumState> KafkaSource.restoreEnumerator(org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> enumContext, KafkaSourceEnumState checkpoint)
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source.enumerator
Method parameters in org.apache.flink.connector.kafka.source.enumerator with type arguments of type KafkaPartitionSplit:
  void KafkaSourceEnumerator.addSplitsBack(List<KafkaPartitionSplit> splits, int subtaskId)

Constructor parameters in org.apache.flink.connector.kafka.source.enumerator with type arguments of type KafkaPartitionSplit:
  KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness)
  KafkaSourceEnumerator(KafkaSubscriber subscriber, OffsetsInitializer startingOffsetInitializer, OffsetsInitializer stoppingOffsetInitializer, Properties properties, org.apache.flink.api.connector.source.SplitEnumeratorContext<KafkaPartitionSplit> context, org.apache.flink.api.connector.source.Boundedness boundedness, KafkaSourceEnumState kafkaSourceEnumState)
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source.reader
Methods in org.apache.flink.connector.kafka.source.reader that return KafkaPartitionSplit:
  protected KafkaPartitionSplit KafkaSourceReader.toSplitType(String splitId, KafkaPartitionSplitState splitState)

Methods in org.apache.flink.connector.kafka.source.reader that return types with arguments of type KafkaPartitionSplit

Methods in org.apache.flink.connector.kafka.source.reader with parameters of type KafkaPartitionSplit:
  protected KafkaPartitionSplitState KafkaSourceReader.initializedState(KafkaPartitionSplit split)

Method parameters in org.apache.flink.connector.kafka.source.reader with type arguments of type KafkaPartitionSplit:
  void KafkaPartitionSplitReader.handleSplitsChanges(org.apache.flink.connector.base.source.reader.splitreader.SplitsChange<KafkaPartitionSplit> splitsChange)
  void KafkaPartitionSplitReader.pauseOrResumeSplits(Collection<KafkaPartitionSplit> splitsToPause, Collection<KafkaPartitionSplit> splitsToResume)
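The initializedState/toSplitType pair above is the reader's split lifecycle: an immutable split is wrapped in mutable state when assigned, and snapshotted back into a split at checkpoint time. The following JDK-only sketch illustrates that handshake; the names mirror the KafkaSourceReader hooks but are local stand-ins, not the Flink classes.

```java
// JDK-only sketch of the reader's split/state handshake. Field layout is an
// assumption for the sketch, not the Flink implementation.
public class SplitStateSketch {
    // Stands in for the immutable KafkaPartitionSplit.
    record Split(String topic, int partition, long startingOffset) {}

    // Stands in for KafkaPartitionSplitState: the split plus a mutable current offset.
    static final class SplitState {
        final Split split;
        long currentOffset;
        SplitState(Split split) {
            this.split = split;
            this.currentOffset = split.startingOffset();
        }
    }

    // Mirrors initializedState(KafkaPartitionSplit): wrap the split in mutable state.
    static SplitState initializedState(Split split) {
        return new SplitState(split);
    }

    // Mirrors toSplitType(splitId, splitState): snapshot the state back into an
    // immutable split whose starting offset is the current read position.
    static Split toSplitType(SplitState state) {
        return new Split(state.split.topic(), state.split.partition(), state.currentOffset);
    }

    public static void main(String[] args) {
        SplitState state = initializedState(new Split("orders", 0, 10L));
        state.currentOffset = 57L; // the reader has advanced to offset 57
        System.out.println(toSplitType(state).startingOffset()); // prints "57"
    }
}
```

On restore, the snapshotted split resumes reading exactly where the state left off, which is why the starting offset is overwritten with the current one.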
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source.reader.fetcher
Constructor parameters in org.apache.flink.connector.kafka.source.reader.fetcher with type arguments of type KafkaPartitionSplit:
  KafkaSourceFetcherManager(org.apache.flink.connector.base.source.reader.synchronization.FutureCompletingBlockingQueue<org.apache.flink.connector.base.source.reader.RecordsWithSplitIds<org.apache.kafka.clients.consumer.ConsumerRecord<byte[], byte[]>>> elementsQueue, java.util.function.Supplier<org.apache.flink.connector.base.source.reader.splitreader.SplitReader<org.apache.kafka.clients.consumer.ConsumerRecord<byte[], byte[]>, KafkaPartitionSplit>> splitReaderSupplier, java.util.function.Consumer<Collection<String>> splitFinishedHook)
    Creates a new SplitFetcherManager with a single I/O thread.
Uses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source.split
Subclasses of KafkaPartitionSplit in org.apache.flink.connector.kafka.source.split:
  class KafkaPartitionSplitState
    This class extends KafkaPartitionSplit to track a mutable current offset.

Methods in org.apache.flink.connector.kafka.source.split that return KafkaPartitionSplit:
  KafkaPartitionSplit KafkaPartitionSplitSerializer.deserialize(int version, byte[] serialized)
  KafkaPartitionSplit KafkaPartitionSplitState.toKafkaPartitionSplit()
    Use the current offset as the starting offset to create a new KafkaPartitionSplit.

Methods in org.apache.flink.connector.kafka.source.split with parameters of type KafkaPartitionSplit:
  byte[] KafkaPartitionSplitSerializer.serialize(KafkaPartitionSplit split)

Constructors in org.apache.flink.connector.kafka.source.split with parameters of type KafkaPartitionSplit
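The serialize/deserialize pair above is what Flink uses to persist splits in checkpoints. The following self-contained sketch, built only on the JDK, illustrates that versioned round trip; the `Split` record, its field layout, and the byte format are assumptions for the sketch, not the actual KafkaPartitionSplitSerializer wire format.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Illustrative, JDK-only analogue of a split serializer. Names and byte layout
// are assumptions for the sketch, not the Flink implementation.
public class SplitRoundTrip {
    static final int VERSION = 0;

    // Stands in for KafkaPartitionSplit: a topic partition plus an offset range.
    record Split(String topic, int partition, long startingOffset, long stoppingOffset) {}

    // Mirrors serialize(KafkaPartitionSplit split): flatten the identifying fields to bytes.
    static byte[] serialize(Split split) {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(bytes)) {
            out.writeUTF(split.topic());
            out.writeInt(split.partition());
            out.writeLong(split.startingOffset());
            out.writeLong(split.stoppingOffset());
        } catch (IOException e) {
            throw new UncheckedIOException(e); // in-memory streams do not actually fail
        }
        return bytes.toByteArray();
    }

    // Mirrors deserialize(int version, byte[] serialized): the version is handed
    // over by the checkpointing framework rather than stored in the byte array.
    static Split deserialize(int version, byte[] serialized) {
        if (version != VERSION) {
            throw new IllegalArgumentException("Unsupported version: " + version);
        }
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(serialized))) {
            return new Split(in.readUTF(), in.readInt(), in.readLong(), in.readLong());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Split split = new Split("orders", 3, 42L, 10_000L);
        System.out.println(split.equals(deserialize(VERSION, serialize(split)))); // prints "true"
    }
}
```

Keeping the version outside the payload lets a newer serializer read bytes written by an older one without sniffing the format from the data itself.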