Class FlinkKafkaConsumerBase<T>
- Type Parameters:
T - The type of records produced by this data source
- All Implemented Interfaces:
Serializable, org.apache.flink.api.common.functions.Function, org.apache.flink.api.common.functions.RichFunction, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.api.java.typeutils.ResultTypeQueryable<T>, org.apache.flink.streaming.api.checkpoint.CheckpointedFunction, org.apache.flink.streaming.api.functions.source.ParallelSourceFunction<T>, org.apache.flink.streaming.api.functions.source.SourceFunction<T>
- Direct Known Subclasses:
FlinkKafkaConsumer
The Kafka version-specific behavior is defined mainly in the specific subclasses of the AbstractFetcher.
- See Also:
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.flink.streaming.api.functions.source.SourceFunction
org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> -
Field Summary
Fields
protected final KafkaDeserializationSchema<T> deserializer
    Deprecated. The schema to convert between Kafka's byte messages and Flink's objects.
static final String KEY_DISABLE_METRICS
    Deprecated. Boolean configuration key to disable metrics tracking.
static final String KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS
    Deprecated. Configuration key to define the consumer's partition discovery interval, in milliseconds.
protected static final org.slf4j.Logger LOG
    Deprecated.
static final int MAX_NUM_PENDING_CHECKPOINTS
    Deprecated. The maximum number of pending non-committed checkpoints to track, to avoid memory leaks.
static final long PARTITION_DISCOVERY_DISABLED
    Deprecated. The default interval to execute partition discovery, in milliseconds (Long.MIN_VALUE, i.e. disabled by default).
Constructor Summary
Constructors
FlinkKafkaConsumerBase(List<String> topics, Pattern topicPattern, KafkaDeserializationSchema<T> deserializer, long discoveryIntervalMillis, boolean useMetrics)
    Deprecated. Base constructor.
Method Summary
Methods
protected static void adjustAutoCommitConfig(Properties properties, OffsetCommitMode offsetCommitMode)
    Deprecated. Makes sure that auto commit is disabled when the offset commit mode is ON_CHECKPOINTS.
FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.api.common.eventtime.WatermarkStrategy<T> watermarkStrategy)
    Deprecated. Sets the given WatermarkStrategy on this consumer.
FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T> assigner)
    Deprecated. Please use assignTimestampsAndWatermarks(WatermarkStrategy) instead.
FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T> assigner)
    Deprecated. Please use assignTimestampsAndWatermarks(WatermarkStrategy) instead.
void cancel()
    Deprecated.
void close()
    Deprecated.
protected abstract AbstractFetcher<T,?> createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition, Long> subscribedPartitionsToStartOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup kafkaMetricGroup, boolean useMetrics)
    Deprecated. Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams.
protected abstract AbstractPartitionDiscoverer createPartitionDiscoverer(KafkaTopicsDescriptor topicsDescriptor, int indexOfThisSubtask, int numParallelSubtasks)
    Deprecated. Creates the partition discoverer that is used to find new partitions for this subtask.
FlinkKafkaConsumerBase<T> disableFilterRestoredPartitionsWithSubscribedTopics()
    Deprecated. By default, when restoring from a checkpoint / savepoint, the consumer always ignores restored partitions that are no longer associated with the current specified topics or topic pattern to subscribe to.
protected abstract Map<KafkaTopicPartition, Long> fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)
    Deprecated.
boolean getEnableCommitOnCheckpoints()
    Deprecated.
protected abstract boolean getIsAutoCommitEnabled()
    Deprecated.
org.apache.flink.api.common.typeinfo.TypeInformation<T> getProducedType()
    Deprecated.
final void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context)
    Deprecated.
void notifyCheckpointAborted(long checkpointId)
    Deprecated.
final void notifyCheckpointComplete(long checkpointId)
    Deprecated.
void open(org.apache.flink.configuration.Configuration configuration)
    Deprecated.
void run(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext)
    Deprecated.
FlinkKafkaConsumerBase<T> setCommitOffsetsOnCheckpoints(boolean commitOnCheckpoints)
    Deprecated. Specifies whether or not the consumer should commit offsets back to Kafka on checkpoints.
FlinkKafkaConsumerBase<T> setStartFromEarliest()
    Deprecated. Specifies the consumer to start reading from the earliest offset for all partitions.
FlinkKafkaConsumerBase<T> setStartFromGroupOffsets()
    Deprecated. Specifies the consumer to start reading from any committed group offsets found in Zookeeper / Kafka brokers.
FlinkKafkaConsumerBase<T> setStartFromLatest()
    Deprecated. Specifies the consumer to start reading from the latest offset for all partitions.
FlinkKafkaConsumerBase<T> setStartFromSpecificOffsets(Map<KafkaTopicPartition, Long> specificStartupOffsets)
    Deprecated. Specifies the consumer to start reading partitions from specific offsets, set independently for each partition.
FlinkKafkaConsumerBase<T> setStartFromTimestamp(long startupOffsetsTimestamp)
    Deprecated. Specifies the consumer to start reading partitions from a specified timestamp.
final void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context)
    Deprecated.
Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction
    getIterationRuntimeContext, getRuntimeContext, setRuntimeContext
Methods inherited from class java.lang.Object
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.flink.api.common.functions.RichFunction
    open
-
Field Details
-
LOG
protected static final org.slf4j.Logger LOG
Deprecated.
-
MAX_NUM_PENDING_CHECKPOINTS
public static final int MAX_NUM_PENDING_CHECKPOINTS
Deprecated. The maximum number of pending non-committed checkpoints to track, to avoid memory leaks.
- See Also:
-
PARTITION_DISCOVERY_DISABLED
public static final long PARTITION_DISCOVERY_DISABLED
Deprecated. The default interval to execute partition discovery, in milliseconds (Long.MIN_VALUE, i.e. disabled by default).
- See Also:
-
KEY_DISABLE_METRICS
public static final String KEY_DISABLE_METRICS
Deprecated. Boolean configuration key to disable metrics tracking.
- See Also:
-
KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS
public static final String KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS
Deprecated. Configuration key to define the consumer's partition discovery interval, in milliseconds.
- See Also:
-
deserializer
protected final KafkaDeserializationSchema<T> deserializer
Deprecated. The schema to convert between Kafka's byte messages and Flink's objects.
-
-
Constructor Details
-
FlinkKafkaConsumerBase
public FlinkKafkaConsumerBase(List<String> topics, Pattern topicPattern, KafkaDeserializationSchema<T> deserializer, long discoveryIntervalMillis, boolean useMetrics)
Deprecated. Base constructor.
- Parameters:
topics - fixed list of topics to subscribe to (null, if using topic pattern)
topicPattern - the topic pattern to subscribe to (null, if using fixed topics)
deserializer - The deserializer to turn raw byte messages into Java/Scala objects.
discoveryIntervalMillis - the topic / partition discovery interval, in milliseconds (0 if discovery is disabled).
-
-
Method Details
-
adjustAutoCommitConfig
protected static void adjustAutoCommitConfig(Properties properties, OffsetCommitMode offsetCommitMode)
Deprecated. Makes sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS. This overwrites whatever setting the user configured in the properties.
- Parameters:
properties - Kafka configuration properties to be adjusted
offsetCommitMode - offset commit mode
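The adjustment described above can be sketched in plain Java. This is an illustrative model, not Flink's actual implementation; the class name and the `OffsetCommitMode` enum values below are assumptions for this sketch, and only the ON_CHECKPOINTS case described in the documentation is modeled.

```java
import java.util.Properties;

// Illustrative sketch (not Flink source code): when offsets are committed
// on checkpoints, Kafka's own auto commit must be forced off, regardless
// of what the user configured in the properties.
public class AutoCommitAdjustSketch {

    // Hypothetical stand-in for org.apache.flink...OffsetCommitMode.
    public enum OffsetCommitMode { ON_CHECKPOINTS, KAFKA_PERIODIC, DISABLED }

    public static void adjustAutoCommitConfig(Properties properties, OffsetCommitMode mode) {
        if (mode == OffsetCommitMode.ON_CHECKPOINTS) {
            // Overwrites any user-provided "enable.auto.commit" setting.
            properties.setProperty("enable.auto.commit", "false");
        }
    }
}
```

The point to note is that the user's property value is silently overwritten, so committed offsets come exclusively from Flink's checkpoint mechanism in this mode.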
-
assignTimestampsAndWatermarks
public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.api.common.eventtime.WatermarkStrategy<T> watermarkStrategy)
Deprecated. Sets the given WatermarkStrategy on this consumer. It will be used to assign timestamps to records and to generate watermarks to signal event time progress.
Running timestamp extractors / watermark generators directly inside the Kafka source (which you can do by using this method), per Kafka partition, allows users to exploit the per-partition characteristics.
When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions, the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition characteristics are usually lost that way. For example, if the timestamps are strictly ascending per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream if the parallel source subtask reads more than one partition.
Common watermark generation patterns can be found as static methods in the WatermarkStrategy class.
- Returns:
- The consumer object, to allow function chaining.
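The per-partition merge described above can be modeled in a few lines. This is an illustrative sketch of the merging behavior, not Flink's internal code; the class and method names are invented for this example. The merged watermark of a source subtask is the minimum of its per-partition watermarks, which is why one slow partition holds back event time for the whole subtask.

```java
import java.util.Arrays;

// Illustrative sketch only: models how per-partition watermarks combine
// when partition streams are merged. The merged watermark is the minimum
// across partitions, so a single lagging partition delays event-time
// progress for the entire source subtask.
public class PartitionWatermarkSketch {

    // perPartitionWatermarks: current watermark of each Kafka partition
    // read by this source subtask (epoch millis).
    public static long mergedWatermark(long[] perPartitionWatermarks) {
        return Arrays.stream(perPartitionWatermarks).min().orElse(Long.MIN_VALUE);
    }
}
```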
-
assignTimestampsAndWatermarks
@Deprecated public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.streaming.api.functions.AssignerWithPunctuatedWatermarks<T> assigner)
Deprecated. Please use assignTimestampsAndWatermarks(WatermarkStrategy) instead.
Specifies an AssignerWithPunctuatedWatermarks to emit watermarks in a punctuated manner. The watermark extractor will run per Kafka partition, and watermarks will be merged across partitions in the same way as in the Flink runtime when streams are merged.
When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions, the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition characteristics are usually lost that way. For example, if the timestamps are strictly ascending per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream if the parallel source subtask reads more than one partition.
Running timestamp extractors / watermark generators directly inside the Kafka source, per Kafka partition, allows users to exploit the per-partition characteristics.
Note: One can use either an AssignerWithPunctuatedWatermarks or an AssignerWithPeriodicWatermarks, not both at the same time.
This method uses the deprecated watermark generator interfaces. Please switch to assignTimestampsAndWatermarks(WatermarkStrategy) to use the new interfaces instead. The new interfaces support watermark idleness and no longer need to differentiate between "periodic" and "punctuated" watermarks.
- Parameters:
assigner - The timestamp assigner / watermark generator to use.
- Returns:
- The consumer object, to allow function chaining.
-
assignTimestampsAndWatermarks
@Deprecated public FlinkKafkaConsumerBase<T> assignTimestampsAndWatermarks(org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks<T> assigner)
Deprecated. Please use assignTimestampsAndWatermarks(WatermarkStrategy) instead.
Specifies an AssignerWithPeriodicWatermarks to emit watermarks in a periodic manner. The watermark extractor will run per Kafka partition, and watermarks will be merged across partitions in the same way as in the Flink runtime when streams are merged.
When a subtask of a FlinkKafkaConsumer source reads multiple Kafka partitions, the streams from the partitions are unioned in a "first come first serve" fashion. Per-partition characteristics are usually lost that way. For example, if the timestamps are strictly ascending per Kafka partition, they will not be strictly ascending in the resulting Flink DataStream if the parallel source subtask reads more than one partition.
Running timestamp extractors / watermark generators directly inside the Kafka source, per Kafka partition, allows users to exploit the per-partition characteristics.
Note: One can use either an AssignerWithPunctuatedWatermarks or an AssignerWithPeriodicWatermarks, not both at the same time.
This method uses the deprecated watermark generator interfaces. Please switch to assignTimestampsAndWatermarks(WatermarkStrategy) to use the new interfaces instead. The new interfaces support watermark idleness and no longer need to differentiate between "periodic" and "punctuated" watermarks.
- Parameters:
assigner - The timestamp assigner / watermark generator to use.
- Returns:
- The consumer object, to allow function chaining.
-
setCommitOffsetsOnCheckpoints
public FlinkKafkaConsumerBase<T> setCommitOffsetsOnCheckpoints(boolean commitOnCheckpoints)
Deprecated. Specifies whether or not the consumer should commit offsets back to Kafka on checkpoints.
This setting will only have effect if checkpointing is enabled for the job. If checkpointing isn't enabled, only the "auto.commit.enable" (for 0.8) / "enable.auto.commit" (for 0.9+) property settings will be used.
- Returns:
- The consumer object, to allow function chaining.
-
setStartFromEarliest
public FlinkKafkaConsumerBase<T> setStartFromEarliest()
Deprecated. Specifies the consumer to start reading from the earliest offset for all partitions. This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
This method does not affect where partitions are read from when the consumer is restored from a checkpoint or savepoint. When the consumer is restored from a checkpoint or savepoint, only the offsets in the restored state will be used.
- Returns:
- The consumer object, to allow function chaining.
-
setStartFromLatest
public FlinkKafkaConsumerBase<T> setStartFromLatest()
Deprecated. Specifies the consumer to start reading from the latest offset for all partitions. This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
This method does not affect where partitions are read from when the consumer is restored from a checkpoint or savepoint. When the consumer is restored from a checkpoint or savepoint, only the offsets in the restored state will be used.
- Returns:
- The consumer object, to allow function chaining.
-
setStartFromTimestamp
public FlinkKafkaConsumerBase<T> setStartFromTimestamp(long startupOffsetsTimestamp)
Deprecated. Specifies the consumer to start reading partitions from a specified timestamp. The specified timestamp must be before the current timestamp. This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
The consumer will look up the earliest offset whose timestamp is greater than or equal to the specified timestamp from Kafka. If there's no such offset, the consumer will use the latest offset to read data from Kafka.
This method does not affect where partitions are read from when the consumer is restored from a checkpoint or savepoint. When the consumer is restored from a checkpoint or savepoint, only the offsets in the restored state will be used.
- Parameters:
startupOffsetsTimestamp - timestamp for the startup offsets, as milliseconds from epoch.
- Returns:
- The consumer object, to allow function chaining.
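The lookup rule described above ("earliest offset whose timestamp is greater than or equal to the specified timestamp, else the latest offset") can be sketched with a sorted map. This is an illustrative model under stated assumptions, not Flink's or Kafka's actual offset lookup; the class and method names are invented for this example.

```java
import java.util.Map;
import java.util.NavigableMap;

// Illustrative sketch only: models the startup-offset resolution rule.
// timestampToOffset maps a record timestamp to the offset of the first
// record with that timestamp, for one partition.
public class TimestampOffsetSketch {

    public static long resolveStartOffset(NavigableMap<Long, Long> timestampToOffset,
                                          long startupTimestamp,
                                          long latestOffset) {
        // Earliest entry whose timestamp >= startupTimestamp.
        Map.Entry<Long, Long> entry = timestampToOffset.ceilingEntry(startupTimestamp);
        // No such offset: fall back to the latest offset.
        return entry != null ? entry.getValue() : latestOffset;
    }
}
```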
-
setStartFromGroupOffsets
public FlinkKafkaConsumerBase<T> setStartFromGroupOffsets()
Deprecated. Specifies the consumer to start reading from any committed group offsets found in Zookeeper / Kafka brokers. The "group.id" property must be set in the configuration properties. If no offset can be found for a partition, the behaviour in "auto.offset.reset" set in the configuration properties will be used for the partition.
This method does not affect where partitions are read from when the consumer is restored from a checkpoint or savepoint. When the consumer is restored from a checkpoint or savepoint, only the offsets in the restored state will be used.
- Returns:
- The consumer object, to allow function chaining.
-
setStartFromSpecificOffsets
public FlinkKafkaConsumerBase<T> setStartFromSpecificOffsets(Map<KafkaTopicPartition, Long> specificStartupOffsets)
Deprecated. Specifies the consumer to start reading partitions from specific offsets, set independently for each partition. The specified offset should be the offset of the next record that will be read from partitions. This lets the consumer ignore any committed group offsets in Zookeeper / Kafka brokers.
If the provided map of offsets contains entries whose KafkaTopicPartition is not subscribed by the consumer, the entry will be ignored. If the consumer subscribes to a partition that does not exist in the provided map of offsets, the consumer will fall back to the default group offset behaviour (see setStartFromGroupOffsets()) for that particular partition.
If the specified offset for a partition is invalid, or the behaviour for that partition is defaulted to group offsets but still no group offset could be found for it, then the "auto.offset.reset" behaviour set in the configuration properties will be used for the partition.
This method does not affect where partitions are read from when the consumer is restored from a checkpoint or savepoint. When the consumer is restored from a checkpoint or savepoint, only the offsets in the restored state will be used.
- Returns:
- The consumer object, to allow function chaining.
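The per-partition fallback chain described above (specific offset, else committed group offset, else "auto.offset.reset") can be sketched in plain Java. This is an illustrative model, not Flink's implementation; the class, method, and the use of plain String partition keys are assumptions for this example, with an empty Optional standing in for the "auto.offset.reset" case.

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch only: models how a start offset is resolved for one
// subscribed partition. An empty result means neither a specific nor a
// committed group offset exists, so "auto.offset.reset" would decide.
public class SpecificOffsetSketch {

    public static Optional<Long> startOffsetFor(String partition,
                                                Map<String, Long> specificOffsets,
                                                Map<String, Long> committedGroupOffsets) {
        Long specific = specificOffsets.get(partition);
        if (specific != null) {
            return Optional.of(specific);  // explicitly configured offset wins
        }
        // No specific offset: default group offset behaviour for this partition.
        return Optional.ofNullable(committedGroupOffsets.get(partition));
    }
}
```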
-
disableFilterRestoredPartitionsWithSubscribedTopics
public FlinkKafkaConsumerBase<T> disableFilterRestoredPartitionsWithSubscribedTopics()
Deprecated. By default, when restoring from a checkpoint / savepoint, the consumer always ignores restored partitions that are no longer associated with the current specified topics or topic pattern to subscribe to.
This method configures the consumer to not filter the restored partitions, therefore always attempting to consume whatever partition was present in the previous execution regardless of the specified topics to subscribe to in the current execution.
- Returns:
- The consumer object, to allow function chaining.
-
open
public void open(org.apache.flink.configuration.Configuration configuration) throws Exception
Deprecated.
- Specified by:
open in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
open in class org.apache.flink.api.common.functions.AbstractRichFunction
- Throws:
Exception
-
run
public void run(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext) throws Exception
Deprecated.
-
cancel
public void cancel()
Deprecated.
- Specified by:
cancel in interface org.apache.flink.streaming.api.functions.source.SourceFunction<T>
-
close
public void close() throws Exception
Deprecated.
- Specified by:
close in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
close in class org.apache.flink.api.common.functions.AbstractRichFunction
- Throws:
Exception
-
initializeState
public final void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) throws Exception
Deprecated.
- Specified by:
initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Throws:
Exception
-
snapshotState
public final void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) throws Exception
Deprecated.
- Specified by:
snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Throws:
Exception
-
notifyCheckpointComplete
public final void notifyCheckpointComplete(long checkpointId) throws Exception
Deprecated.
- Specified by:
notifyCheckpointComplete in interface org.apache.flink.api.common.state.CheckpointListener
- Throws:
Exception
-
notifyCheckpointAborted
public void notifyCheckpointAborted(long checkpointId)
Deprecated.
- Specified by:
notifyCheckpointAborted in interface org.apache.flink.api.common.state.CheckpointListener
-
createFetcher
protected abstract AbstractFetcher<T,?> createFetcher(org.apache.flink.streaming.api.functions.source.SourceFunction.SourceContext<T> sourceContext, Map<KafkaTopicPartition, Long> subscribedPartitionsToStartOffsets, org.apache.flink.util.SerializedValue<org.apache.flink.api.common.eventtime.WatermarkStrategy<T>> watermarkStrategy, org.apache.flink.streaming.api.operators.StreamingRuntimeContext runtimeContext, OffsetCommitMode offsetCommitMode, org.apache.flink.metrics.MetricGroup kafkaMetricGroup, boolean useMetrics) throws Exception
Deprecated. Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams.
- Parameters:
sourceContext - The source context to emit data to.
subscribedPartitionsToStartOffsets - The set of partitions that this subtask should handle, with their start offsets.
watermarkStrategy - Optional, a serialized WatermarkStrategy.
runtimeContext - The task's runtime context.
- Returns:
- The instantiated fetcher
- Throws:
Exception- The method should forward exceptions
-
createPartitionDiscoverer
protected abstract AbstractPartitionDiscoverer createPartitionDiscoverer(KafkaTopicsDescriptor topicsDescriptor, int indexOfThisSubtask, int numParallelSubtasks)
Deprecated. Creates the partition discoverer that is used to find new partitions for this subtask.
- Parameters:
topicsDescriptor - Descriptor that describes whether we are discovering partitions for fixed topics or a topic pattern.
indexOfThisSubtask - The index of this consumer subtask.
numParallelSubtasks - The total number of parallel consumer subtasks.
- Returns:
- The instantiated partition discoverer
-
getIsAutoCommitEnabled
protected abstract boolean getIsAutoCommitEnabled()
Deprecated.
-
fetchOffsetsWithTimestamp
protected abstract Map<KafkaTopicPartition, Long> fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition> partitions, long timestamp)
Deprecated.
-
getProducedType
public org.apache.flink.api.common.typeinfo.TypeInformation<T> getProducedType()
Deprecated.
- Specified by:
getProducedType in interface org.apache.flink.api.java.typeutils.ResultTypeQueryable<T>
-
getEnableCommitOnCheckpoints
@VisibleForTesting public boolean getEnableCommitOnCheckpoints()
Deprecated.
-