Class KafkaDynamicSource

java.lang.Object
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
All Implemented Interfaces:
org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown, org.apache.flink.table.connector.source.DynamicTableSource, org.apache.flink.table.connector.source.ScanTableSource

@Internal public class KafkaDynamicSource extends Object implements org.apache.flink.table.connector.source.ScanTableSource, org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
A version-agnostic Kafka ScanTableSource.
  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.DynamicTableSource

    org.apache.flink.table.connector.source.DynamicTableSource.Context, org.apache.flink.table.connector.source.DynamicTableSource.DataStructureConverter

    Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.ScanTableSource

    org.apache.flink.table.connector.source.ScanTableSource.ScanContext, org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    protected final BoundedMode
    boundedMode
    The bounded mode for the contained consumer (default is an unbounded data stream).
    protected final long
    boundedTimestampMillis
    The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
    protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>>
    keyDecodingFormat
    Optional format for decoding keys from Kafka.
    protected final String
    keyPrefix
    Prefix that needs to be removed from fields when constructing the physical data type.
    protected final int[]
    keyProjection
    Indices that determine the key fields and the target position in the produced row.
    protected List<String>
    metadataKeys
    Metadata that is appended at the end of a physical source row.
    protected final org.apache.flink.table.types.DataType
    physicalDataType
    Data type to configure the formats.
    protected org.apache.flink.table.types.DataType
    producedDataType
    Data type that describes the final output of the source.
    protected final Properties
    properties
    Properties for the Kafka consumer.
    protected final Map<KafkaTopicPartition,Long>
    specificBoundedOffsets
    Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
    protected final Map<KafkaTopicPartition,Long>
    specificStartupOffsets
    Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
    protected final StartupMode
    startupMode
    The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
    protected final long
    startupTimestampMillis
    The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
    protected final String
    tableIdentifier
     
    protected final Pattern
    topicPattern
    The Kafka topic pattern to consume.
    protected final List<String>
    topics
    The Kafka topics to consume.
    protected final boolean
    upsertMode
    Flag to determine the source mode.
    protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>>
    valueDecodingFormat
    Format for decoding values from Kafka.
    protected final int[]
    valueProjection
    Indices that determine the value fields and the target position in the produced row.
    protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData>
    watermarkStrategy
    Watermark strategy that is used to generate per-partition watermarks.
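    The keyProjection and valueProjection fields can be pictured with a small, self-contained Java sketch. The ProjectionDemo class and its assemble helper below are hypothetical illustrations of the documented semantics ("indices that determine the ... fields and the target position in the produced row"), not code from this class: each array entry says at which position of the produced row a decoded key or value field lands.

    ```java
    import java.util.Arrays;

    public class ProjectionDemo {
        // Hypothetical helper: place each decoded key field and value field at
        // the target index given by keyProjection/valueProjection.
        static Object[] assemble(Object[] key, int[] keyProjection,
                                 Object[] value, int[] valueProjection) {
            Object[] row = new Object[keyProjection.length + valueProjection.length];
            for (int i = 0; i < keyProjection.length; i++) {
                row[keyProjection[i]] = key[i];
            }
            for (int i = 0; i < valueProjection.length; i++) {
                row[valueProjection[i]] = value[i];
            }
            return row;
        }

        public static void main(String[] args) {
            // One key field targeted at position 0, two value fields at positions 1 and 2.
            Object[] row = assemble(new Object[]{"k1"}, new int[]{0},
                                    new Object[]{"a", 42}, new int[]{1, 2});
            System.out.println(Arrays.toString(row)); // [k1, a, 42]
        }
    }
    ```

    The real class hands these index arrays to its deserialization schema; the sketch only shows why the two arrays together describe the full produced row.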
  • Constructor Summary

    Constructors
    Constructor
    Description
    KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)
     
  • Method Summary

    Modifier and Type
    Method
    Description
    void
    applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
     
    void
    applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
     
    String
    asSummaryString()
     
    org.apache.flink.table.connector.source.DynamicTableSource
    copy()
     
    protected KafkaSource<org.apache.flink.table.data.RowData>
    createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
     
    boolean
    equals(Object o)
     
    org.apache.flink.table.connector.ChangelogMode
    getChangelogMode()
     
    org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider
    getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
     
    int
    hashCode()
     
    Map<String,org.apache.flink.table.types.DataType>
    listReadableMetadata()
     
    boolean
    supportsMetadataProjection()

    Methods inherited from class java.lang.Object

    clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait
  • Field Details

    • producedDataType

      protected org.apache.flink.table.types.DataType producedDataType
      Data type that describes the final output of the source.
    • metadataKeys

      protected List<String> metadataKeys
      Metadata that is appended at the end of a physical source row.
    • watermarkStrategy

      @Nullable protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
      Watermark strategy that is used to generate per-partition watermarks.
    • physicalDataType

      protected final org.apache.flink.table.types.DataType physicalDataType
      Data type to configure the formats.
    • keyDecodingFormat

      @Nullable protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
      Optional format for decoding keys from Kafka.
    • valueDecodingFormat

      protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
      Format for decoding values from Kafka.
    • keyProjection

      protected final int[] keyProjection
      Indices that determine the key fields and the target position in the produced row.
    • valueProjection

      protected final int[] valueProjection
      Indices that determine the value fields and the target position in the produced row.
    • keyPrefix

      @Nullable protected final String keyPrefix
      Prefix that needs to be removed from fields when constructing the physical data type.
    • topics

      protected final List<String> topics
      The Kafka topics to consume.
    • topicPattern

      protected final Pattern topicPattern
      The Kafka topic pattern to consume.
    • properties

      protected final Properties properties
      Properties for the Kafka consumer.
    • startupMode

      protected final StartupMode startupMode
      The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
    • specificStartupOffsets

      protected final Map<KafkaTopicPartition,Long> specificStartupOffsets
      Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
    • startupTimestampMillis

      protected final long startupTimestampMillis
      The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
    • boundedMode

      protected final BoundedMode boundedMode
      The bounded mode for the contained consumer (default is an unbounded data stream).
    • specificBoundedOffsets

      protected final Map<KafkaTopicPartition,Long> specificBoundedOffsets
      Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
    • boundedTimestampMillis

      protected final long boundedTimestampMillis
      The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
    • upsertMode

      protected final boolean upsertMode
      Flag to determine the source mode. In upsert mode, tombstone messages are kept.
    • tableIdentifier

      protected final String tableIdentifier
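    The effect of upsertMode on tombstones can be sketched in plain Java. The UpsertDemo class below is an illustrative assumption about the documented behavior (a Kafka record with a null value is a tombstone, and in upsert mode it is kept and interpreted as a deletion for its key), not the connector's actual code path:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class UpsertDemo {
        // Hypothetical sketch: apply one Kafka record to keyed state. A null
        // value is a tombstone; in upsert mode it deletes the key, otherwise
        // it carries no row and is skipped.
        static void apply(Map<String, String> state, String key, String value,
                          boolean upsertMode) {
            if (value == null) {
                if (upsertMode) {
                    state.remove(key); // tombstone -> DELETE for this key
                }
                return;
            }
            state.put(key, value); // upsert: last value per key wins
        }

        public static void main(String[] args) {
            Map<String, String> state = new HashMap<>();
            apply(state, "a", "1", true);
            apply(state, "a", null, true); // tombstone removes key "a"
            System.out.println(state); // {}
        }
    }
    ```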
  • Constructor Details

    • KafkaDynamicSource

      public KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, @Nullable org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, @Nullable String keyPrefix, @Nullable List<String> topics, @Nullable Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)
  • Method Details

    • getChangelogMode

      public org.apache.flink.table.connector.ChangelogMode getChangelogMode()
      Specified by:
      getChangelogMode in interface org.apache.flink.table.connector.source.ScanTableSource
    • getScanRuntimeProvider

      public org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
      Specified by:
      getScanRuntimeProvider in interface org.apache.flink.table.connector.source.ScanTableSource
    • listReadableMetadata

      public Map<String,org.apache.flink.table.types.DataType> listReadableMetadata()
      Specified by:
      listReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
    • applyReadableMetadata

      public void applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
      Specified by:
      applyReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
    • supportsMetadataProjection

      public boolean supportsMetadataProjection()
      Specified by:
      supportsMetadataProjection in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
    • applyWatermark

      public void applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
      Specified by:
      applyWatermark in interface org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
    • copy

      public org.apache.flink.table.connector.source.DynamicTableSource copy()
      Specified by:
      copy in interface org.apache.flink.table.connector.source.DynamicTableSource
    • asSummaryString

      public String asSummaryString()
      Specified by:
      asSummaryString in interface org.apache.flink.table.connector.source.DynamicTableSource
    • equals

      public boolean equals(Object o)
      Overrides:
      equals in class Object
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class Object
    • createKafkaSource

      protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
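      createKafkaSource subscribes using either the fixed topics list or the topicPattern field. The pattern side can be illustrated with an ordinary java.util.regex sketch; the matchTopics helper below is hypothetical (the real source presumably passes the Pattern on to the Kafka source's subscription logic rather than filtering a topic list eagerly):

      ```java
      import java.util.List;
      import java.util.regex.Pattern;
      import java.util.stream.Collectors;

      public class TopicPatternDemo {
          // Hypothetical helper: select the topics whose names fully match the
          // configured subscription pattern.
          static List<String> matchTopics(Pattern pattern, List<String> available) {
              return available.stream()
                      .filter(t -> pattern.matcher(t).matches())
                      .collect(Collectors.toList());
          }

          public static void main(String[] args) {
              Pattern p = Pattern.compile("orders-.*");
              System.out.println(matchTopics(p, List.of("orders-eu", "orders-us", "payments")));
              // [orders-eu, orders-us]
          }
      }
      ```

      Note that topics and topicPattern are mutually alternative ways to define the subscription, which is why both constructor parameters are @Nullable.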