Uses of Class
org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
-
Uses of KafkaTopicPartition in org.apache.flink.streaming.connectors.kafka
Methods in org.apache.flink.streaming.connectors.kafka that return types with arguments of type KafkaTopicPartition:

- protected Map<KafkaTopicPartition,Long> FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)
  Deprecated.
- protected abstract Map<KafkaTopicPartition,Long> FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)

Method parameters in org.apache.flink.streaming.connectors.kafka with type arguments of type KafkaTopicPartition:

- protected AbstractFetcher<T,?> FlinkKafkaConsumer.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics)
  Deprecated.
- protected abstract AbstractFetcher<T,?> FlinkKafkaConsumerBase.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> subscribedPartitionsToStartOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup kafkaMetricGroup, boolean useMetrics)
  Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes it, and emits it into the data streams.
- protected Map<KafkaTopicPartition,Long> FlinkKafkaConsumer.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)
  Deprecated.
- protected abstract Map<KafkaTopicPartition,Long> FlinkKafkaConsumerBase.fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)
- FlinkKafkaConsumerBase<T> FlinkKafkaConsumerBase.setStartFromSpecificOffsets(Map<KafkaTopicPartition,Long> specificStartupOffsets)
  Specifies that the consumer should start reading partitions from specific offsets, set independently for each partition.
-
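setStartFromSpecificOffsets takes a map keyed by KafkaTopicPartition, which identifies a partition by its (topic, partition) pair and implements equals/hashCode by value so it can serve as a map key. The following self-contained sketch uses a hypothetical stand-in class (TopicPartitionKey, not the real Flink type) to illustrate how such a per-partition offset map is built and looked up:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Stand-in for org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition:
// a (topic, partition) pair with value semantics so it can key a Map.
final class TopicPartitionKey {
    final String topic;
    final int partition;

    TopicPartitionKey(String topic, int partition) {
        this.topic = topic;
        this.partition = partition;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof TopicPartitionKey)) return false;
        TopicPartitionKey other = (TopicPartitionKey) o;
        return partition == other.partition && topic.equals(other.topic);
    }

    @Override
    public int hashCode() {
        return Objects.hash(topic, partition);
    }
}

public class SpecificOffsetsExample {
    public static void main(String[] args) {
        // Offsets to resume from, set independently per partition, shaped like the
        // Map<KafkaTopicPartition,Long> that setStartFromSpecificOffsets expects.
        Map<TopicPartitionKey, Long> specificStartupOffsets = new HashMap<>();
        specificStartupOffsets.put(new TopicPartitionKey("orders", 0), 23L);
        specificStartupOffsets.put(new TopicPartitionKey("orders", 1), 31L);

        // Lookup works by value, not identity, because of equals/hashCode.
        System.out.println(specificStartupOffsets.get(new TopicPartitionKey("orders", 0))); // prints 23
    }
}
```

Topic name and offsets here are illustrative; with the real connector the map would be passed to FlinkKafkaConsumerBase.setStartFromSpecificOffsets.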
Uses of KafkaTopicPartition in org.apache.flink.streaming.connectors.kafka.internals
Methods in org.apache.flink.streaming.connectors.kafka.internals that return KafkaTopicPartition:

- KafkaTopicPartition KafkaTopicPartitionState.getKafkaTopicPartition()
  Gets Flink's descriptor for the Kafka partition.
- KafkaTopicPartition KafkaTopicPartitionLeader.getTopicPartition()

Methods in org.apache.flink.streaming.connectors.kafka.internals that return types with arguments of type KafkaTopicPartition:

- List<KafkaTopicPartition> AbstractPartitionDiscoverer.discoverPartitions()
  Executes a partition discovery attempt for this subtask.
- static List<KafkaTopicPartition> KafkaTopicPartition.dropLeaderData(List<KafkaTopicPartitionLeader> partitionInfos)
- protected abstract List<KafkaTopicPartition> AbstractPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics)
  Fetches the list of all partitions for the given list of topics from Kafka.
- protected List<KafkaTopicPartition> KafkaPartitionDiscoverer.getAllPartitionsForTopics(List<String> topics)
- HashMap<KafkaTopicPartition,Long> AbstractFetcher.snapshotCurrentState()
  Takes a snapshot of the partition offsets.

Methods in org.apache.flink.streaming.connectors.kafka.internals with parameters of type KafkaTopicPartition:

- static int KafkaTopicPartitionAssigner.assign(KafkaTopicPartition partition, int numParallelSubtasks)
  Returns the index of the target subtask that a specific Kafka partition should be assigned to.
- int KafkaTopicPartition.Comparator.compare(KafkaTopicPartition p1, KafkaTopicPartition p2)
- protected abstract KPH AbstractFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition)
  Creates the Kafka-version-specific representation of the given topic partition.
- org.apache.kafka.common.TopicPartition KafkaFetcher.createKafkaPartitionHandle(KafkaTopicPartition partition)
- boolean AbstractPartitionDiscoverer.setAndCheckDiscoveredPartition(KafkaTopicPartition partition)
  Sets a partition as discovered.

Method parameters in org.apache.flink.streaming.connectors.kafka.internals with type arguments of type KafkaTopicPartition:

- void AbstractFetcher.addDiscoveredPartitions(List<KafkaTopicPartition> newPartitions)
  Adds a list of newly discovered partitions to the fetcher for consuming.
- void AbstractFetcher.commitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback)
  Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka versions).
- protected abstract void AbstractFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback)
- protected void KafkaFetcher.doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition,Long> offsets, KafkaCommitCallback commitCallback)
- static String KafkaTopicPartition.toString(List<KafkaTopicPartition> partitions)
- static String KafkaTopicPartition.toString(Map<KafkaTopicPartition,Long> map)

Constructors in org.apache.flink.streaming.connectors.kafka.internals with parameters of type KafkaTopicPartition:

- KafkaTopicPartitionLeader(KafkaTopicPartition topicPartition, org.apache.kafka.common.Node leader)
- KafkaTopicPartitionState(KafkaTopicPartition partition, KPH kafkaPartitionHandle)
- KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition partition, KPH kafkaPartitionHandle, org.apache.flink.api.common.eventtime.TimestampAssigner<T> timestampAssigner, org.apache.flink.api.common.eventtime.WatermarkGenerator<T> watermarkGenerator, org.apache.flink.api.common.eventtime.WatermarkOutput immediateOutput, org.apache.flink.api.common.eventtime.WatermarkOutput deferredOutput)

Constructor parameters in org.apache.flink.streaming.connectors.kafka.internals with type arguments of type KafkaTopicPartition:

- AbstractFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> seedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics)
- KafkaFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, org.apache.flink.metrics.MetricGroup subtaskMetricGroup, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics)
- KafkaShuffleFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.runtime.tasks.ProcessingTimeService processingTimeProvider, long autoWatermarkInterval, ClassLoader userCodeClassLoader, String taskNameWithSubtasks, KafkaDeserializationSchema<T> deserializer, Properties kafkaProperties, long pollTimeout, org.apache.flink.metrics.MetricGroup subtaskMetricGroup, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics, org.apache.flink.api.common.typeutils.TypeSerializer<T> typeSerializer, int producerParallelism)
-
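KafkaTopicPartitionAssigner.assign(partition, numParallelSubtasks) spreads the partitions of one topic round-robin across subtasks, starting from an index derived from the topic name so that different topics do not all start on subtask 0. The self-contained sketch below illustrates that contract; the start-index formula mirrors Flink's implementation but is reproduced here for illustration, not taken from this page:

```java
public class PartitionAssignmentSketch {
    // Sketch of the KafkaTopicPartitionAssigner.assign(...) contract:
    // a topic-dependent start index, then round-robin by partition number.
    static int assign(String topic, int partition, int numParallelSubtasks) {
        // Mask with 0x7FFFFFFF to keep the index non-negative even if hashCode overflows.
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
        // Adjacent partitions land on adjacent subtasks (modulo wrap-around).
        return (startIndex + partition) % numParallelSubtasks;
    }

    public static void main(String[] args) {
        int parallelism = 4;
        for (int p = 0; p < 6; p++) {
            System.out.println("partition " + p + " -> subtask "
                    + assign("orders", p, parallelism));
        }
    }
}
```

The useful property to observe is that, for a fixed topic, consecutive partitions map to consecutive subtask indices, so a topic with at least as many partitions as the parallelism keeps every subtask busy.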
Uses of KafkaTopicPartition in org.apache.flink.streaming.connectors.kafka.shuffle
Method parameters in org.apache.flink.streaming.connectors.kafka.shuffle with type arguments of type KafkaTopicPartition:

- protected AbstractFetcher<T,?> FlinkKafkaShuffleConsumer.createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition,Long> assignedPartitionsWithInitialOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup consumerMetricGroup, boolean useMetrics)
-
Uses of KafkaTopicPartition in org.apache.flink.streaming.connectors.kafka.table
Fields in org.apache.flink.streaming.connectors.kafka.table with type parameters of type KafkaTopicPartition:

- protected Map<KafkaTopicPartition,Long> KafkaDynamicSource.specificBoundedOffsets
  Specific end offsets; only relevant when the bounded mode is BoundedMode.SPECIFIC_OFFSETS.
- protected Map<KafkaTopicPartition,Long> KafkaDynamicSource.specificStartupOffsets
  Specific startup offsets; only relevant when the startup mode is StartupMode.SPECIFIC_OFFSETS.

Method parameters in org.apache.flink.streaming.connectors.kafka.table with type arguments of type KafkaTopicPartition:

- protected KafkaDynamicSource KafkaDynamicTableFactory.createKafkaTableSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificEndOffsets, long endTimestampMillis, String tableIdentifier)

Constructor parameters in org.apache.flink.streaming.connectors.kafka.table with type arguments of type KafkaTopicPartition:

- KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)
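In the Table/SQL API these two offset maps are not built by hand; they are populated from table options when the startup or bounded mode is specific-offsets. A hedged sketch of the corresponding DDL, with table name, topic, and offset values chosen purely for illustration (option names follow the Kafka SQL connector documentation; 'scan.bounded.*' options are only available in connector versions that support BoundedMode):

```sql
CREATE TABLE orders_src (
  order_id BIGINT,
  amount DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  -- Populates KafkaDynamicSource.specificStartupOffsets (StartupMode.SPECIFIC_OFFSETS)
  'scan.startup.mode' = 'specific-offsets',
  'scan.startup.specific-offsets' = 'partition:0,offset:42;partition:1,offset:300',
  -- Populates KafkaDynamicSource.specificBoundedOffsets (BoundedMode.SPECIFIC_OFFSETS)
  'scan.bounded.mode' = 'specific-offsets',
  'scan.bounded.specific-offsets' = 'partition:0,offset:100;partition:1,offset:500'
);
```

Each `partition:<n>,offset:<m>` entry becomes one KafkaTopicPartition-to-Long entry in the corresponding map passed to the KafkaDynamicSource constructor.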
-