Class KafkaDynamicSource

  • All Implemented Interfaces:
    org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown, org.apache.flink.table.connector.source.DynamicTableSource, org.apache.flink.table.connector.source.ScanTableSource

    @Internal
    public class KafkaDynamicSource
    extends Object
    implements org.apache.flink.table.connector.source.ScanTableSource, org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
    A version-agnostic Kafka ScanTableSource.
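Although this class is marked @Internal and is normally instantiated by the connector factory rather than by user code, it is the source that backs tables declared with the 'kafka' connector in SQL DDL. An illustrative declaration (topic, broker address, and column names are placeholders):

```sql
-- Hypothetical table; the planner translates this 'kafka' connector
-- declaration into a KafkaDynamicSource under the hood.
CREATE TABLE orders (
  order_id BIGINT,
  amount DECIMAL(10, 2)
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',                                -- placeholder topic
  'properties.bootstrap.servers' = 'localhost:9092', -- placeholder brokers
  'format' = 'json'
);
```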
    • Nested Class Summary

      • Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.DynamicTableSource

        org.apache.flink.table.connector.source.DynamicTableSource.Context, org.apache.flink.table.connector.source.DynamicTableSource.DataStructureConverter
      • Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.ScanTableSource

        org.apache.flink.table.connector.source.ScanTableSource.ScanContext, org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider
    • Constructor Summary

      Constructors 
      Constructor Description
      KafkaDynamicSource​(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,​Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,​Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)  
    • Field Detail

      • producedDataType

        protected org.apache.flink.table.types.DataType producedDataType
        Data type that describes the final output of the source.
      • metadataKeys

        protected List<String> metadataKeys
        Metadata that is appended at the end of a physical source row.
      • watermarkStrategy

        @Nullable
        protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
        Watermark strategy that is used to generate per-partition watermarks.
      • physicalDataType

        protected final org.apache.flink.table.types.DataType physicalDataType
        Data type to configure the formats.
      • keyDecodingFormat

        @Nullable
        protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
        Optional format for decoding keys from Kafka.
      • valueDecodingFormat

        protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
        Format for decoding values from Kafka.
      • keyProjection

        protected final int[] keyProjection
        Indices that determine the key fields and the target position in the produced row.
      • valueProjection

        protected final int[] valueProjection
        Indices that determine the value fields and the target position in the produced row.
      • keyPrefix

        @Nullable
        protected final String keyPrefix
        Prefix that needs to be removed from fields when constructing the physical data type.
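In DDL terms, keyProjection, valueProjection, and keyPrefix correspond to the key/value connector options. A sketch with placeholder names, assuming a JSON key and value:

```sql
-- The 'k_' prefix is stripped when building the key's physical data type,
-- matching the keyPrefix field above.
CREATE TABLE keyed_orders (
  k_order_id BIGINT,      -- read from the record key
  amount DECIMAL(10, 2)   -- read from the record value
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'key.fields' = 'k_order_id',
  'key.fields-prefix' = 'k_',
  'value.format' = 'json',
  'value.fields-include' = 'EXCEPT_KEY'  -- required when a key prefix is set
);
```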
      • topics

        protected final List<String> topics
        The Kafka topics to consume.
      • topicPattern

        protected final Pattern topicPattern
        The Kafka topic pattern to consume.
      • properties

        protected final Properties properties
        Properties for the Kafka consumer.
      • startupTimestampMillis

        protected final long startupTimestampMillis
        The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
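The startup mode, specific offsets, and startup timestamp map to the scan.startup.* connector options. For example, the timestamp-based startup corresponding to this field (the literal is a placeholder epoch value):

```sql
'scan.startup.mode' = 'timestamp',
'scan.startup.timestamp-millis' = '1640995200000'  -- placeholder start time
```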
      • boundedMode

        protected final BoundedMode boundedMode
        The bounded mode for the contained consumer (default is an unbounded data stream).
      • boundedTimestampMillis

        protected final long boundedTimestampMillis
        The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
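Analogously to startup, the bounded mode and bounded timestamp map to the scan.bounded.* options, turning the source into a finite scan. A fragment with a placeholder epoch value:

```sql
'scan.bounded.mode' = 'timestamp',
'scan.bounded.timestamp-millis' = '1672531200000'  -- placeholder end time
```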
      • upsertMode

        protected final boolean upsertMode
        Flag that determines the source mode. In upsert mode, tombstone messages (records with a null value) are kept and interpreted as deletions.
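Upsert mode is enabled when the table is declared with the 'upsert-kafka' connector, which requires a primary key so that tombstones can be applied as deletions. An illustrative declaration (names are placeholders):

```sql
CREATE TABLE user_profiles (
  user_id BIGINT,
  region STRING,
  PRIMARY KEY (user_id) NOT ENFORCED  -- keys the upsert/tombstone semantics
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'profiles',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);
```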
      • tableIdentifier

        protected final String tableIdentifier
    • Constructor Detail

      • KafkaDynamicSource

        public KafkaDynamicSource​(org.apache.flink.table.types.DataType physicalDataType,
                                  @Nullable
                                  org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat,
                                  org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat,
                                  int[] keyProjection,
                                  int[] valueProjection,
                                  @Nullable
                                  String keyPrefix,
                                  @Nullable
                                  List<String> topics,
                                  @Nullable
                                  Pattern topicPattern,
                                  Properties properties,
                                  StartupMode startupMode,
                                  Map<KafkaTopicPartition,​Long> specificStartupOffsets,
                                  long startupTimestampMillis,
                                  BoundedMode boundedMode,
                                  Map<KafkaTopicPartition,​Long> specificBoundedOffsets,
                                  long boundedTimestampMillis,
                                  boolean upsertMode,
                                  String tableIdentifier)
    • Method Detail

      • getChangelogMode

        public org.apache.flink.table.connector.ChangelogMode getChangelogMode()
        Specified by:
        getChangelogMode in interface org.apache.flink.table.connector.source.ScanTableSource
      • getScanRuntimeProvider

        public org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider​(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
        Specified by:
        getScanRuntimeProvider in interface org.apache.flink.table.connector.source.ScanTableSource
      • listReadableMetadata

        public Map<String,​org.apache.flink.table.types.DataType> listReadableMetadata()
        Specified by:
        listReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
      • applyReadableMetadata

        public void applyReadableMetadata​(List<String> metadataKeys,
                                          org.apache.flink.table.types.DataType producedDataType)
        Specified by:
        applyReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
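        These two methods back METADATA columns in DDL: the planner passes the requested keys to applyReadableMetadata, and the source appends them to each physical row. A sketch using metadata keys the Kafka connector is known to expose (column names and exact types are illustrative):

        ```sql
        CREATE TABLE events (
          payload STRING,
          part INT METADATA FROM 'partition',
          off BIGINT METADATA FROM 'offset',
          ts TIMESTAMP_LTZ(3) METADATA FROM 'timestamp'
        ) WITH (
          'connector' = 'kafka',
          'topic' = 'events',
          'properties.bootstrap.servers' = 'localhost:9092',
          'format' = 'json'
        );
        ```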
      • supportsMetadataProjection

        public boolean supportsMetadataProjection()
        Specified by:
        supportsMetadataProjection in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
      • applyWatermark

        public void applyWatermark​(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
        Specified by:
        applyWatermark in interface org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
      • copy

        public org.apache.flink.table.connector.source.DynamicTableSource copy()
        Specified by:
        copy in interface org.apache.flink.table.connector.source.DynamicTableSource
      • asSummaryString

        public String asSummaryString()
        Specified by:
        asSummaryString in interface org.apache.flink.table.connector.source.DynamicTableSource
      • hashCode

        public int hashCode()
        Overrides:
        hashCode in class Object
      • createKafkaSource

        protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource​(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization,
                                                                                     org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization,
                                                                                     org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)