Class KafkaDynamicSource
- java.lang.Object
  - org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- All Implemented Interfaces:
org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown, org.apache.flink.table.connector.source.DynamicTableSource, org.apache.flink.table.connector.source.ScanTableSource
@Internal public class KafkaDynamicSource extends Object implements org.apache.flink.table.connector.source.ScanTableSource, org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
A version-agnostic Kafka ScanTableSource.
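This class is marked @Internal and is normally not instantiated by user code; the planner creates it when a table is declared with the Kafka connector. A minimal DDL sketch (topic name, bootstrap servers, group id, and format are placeholders):

```sql
CREATE TABLE orders (
  order_id BIGINT,
  amount DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',                          -- placeholder topic
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'orders-consumer',   -- used by StartupMode.GROUP_OFFSETS
  'scan.startup.mode' = 'group-offsets',
  'format' = 'json'
)
```

Reading from such a table goes through getScanRuntimeProvider(), which builds the underlying KafkaSource from the declared options.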
-
-
Field Summary
Fields (modifier and type, field, description):
- protected BoundedMode boundedMode
  The bounded mode for the contained consumer (default is an unbounded data stream).
- protected long boundedTimestampMillis
  The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
- protected org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
  Optional format for decoding keys from Kafka.
- protected String keyPrefix
  Prefix that needs to be removed from fields when constructing the physical data type.
- protected int[] keyProjection
  Indices that determine the key fields and the target position in the produced row.
- protected List<String> metadataKeys
  Metadata that is appended at the end of a physical source row.
- protected org.apache.flink.table.types.DataType physicalDataType
  Data type to configure the formats.
- protected org.apache.flink.table.types.DataType producedDataType
  Data type that describes the final output of the source.
- protected Properties properties
  Properties for the Kafka consumer.
- protected Map<KafkaTopicPartition,Long> specificBoundedOffsets
  Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
- protected Map<KafkaTopicPartition,Long> specificStartupOffsets
  Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
- protected StartupMode startupMode
  The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
- protected long startupTimestampMillis
  The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
- protected String tableIdentifier
- protected Pattern topicPattern
  The Kafka topic pattern to consume.
- protected List<String> topics
  The Kafka topics to consume.
- protected boolean upsertMode
  Flag to determine source mode.
- protected org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
  Format for decoding values from Kafka.
- protected int[] valueProjection
  Indices that determine the value fields and the target position in the produced row.
- protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
  Watermark strategy that is used to generate per-partition watermarks.
-
Constructor Summary
Constructors:
KafkaDynamicSource(
    org.apache.flink.table.types.DataType physicalDataType,
    org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat,
    org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat,
    int[] keyProjection,
    int[] valueProjection,
    String keyPrefix,
    List<String> topics,
    Pattern topicPattern,
    Properties properties,
    StartupMode startupMode,
    Map<KafkaTopicPartition,Long> specificStartupOffsets,
    long startupTimestampMillis,
    BoundedMode boundedMode,
    Map<KafkaTopicPartition,Long> specificBoundedOffsets,
    long boundedTimestampMillis,
    boolean upsertMode,
    String tableIdentifier)
-
Method Summary
All Methods · Instance Methods · Concrete Methods
- void applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
- void applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
- String asSummaryString()
- org.apache.flink.table.connector.source.DynamicTableSource copy()
- protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
- boolean equals(Object o)
- org.apache.flink.table.connector.ChangelogMode getChangelogMode()
- org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
- int hashCode()
- Map<String,org.apache.flink.table.types.DataType> listReadableMetadata()
- boolean supportsMetadataProjection()
-
-
-
Field Detail
-
producedDataType
protected org.apache.flink.table.types.DataType producedDataType
Data type that describes the final output of the source.
-
metadataKeys
protected List<String> metadataKeys
Metadata that is appended at the end of a physical source row.
-
watermarkStrategy
@Nullable protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
Watermark strategy that is used to generate per-partition watermarks.
-
physicalDataType
protected final org.apache.flink.table.types.DataType physicalDataType
Data type to configure the formats.
-
keyDecodingFormat
@Nullable protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
Optional format for decoding keys from Kafka.
-
valueDecodingFormat
protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
Format for decoding values from Kafka.
-
keyProjection
protected final int[] keyProjection
Indices that determine the key fields and the target position in the produced row.
-
valueProjection
protected final int[] valueProjection
Indices that determine the value fields and the target position in the produced row.
-
keyPrefix
@Nullable protected final String keyPrefix
Prefix that needs to be removed from fields when constructing the physical data type.
-
topicPattern
protected final Pattern topicPattern
The Kafka topic pattern to consume.
-
properties
protected final Properties properties
Properties for the Kafka consumer.
-
startupMode
protected final StartupMode startupMode
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
-
specificStartupOffsets
protected final Map<KafkaTopicPartition,Long> specificStartupOffsets
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
-
startupTimestampMillis
protected final long startupTimestampMillis
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
-
boundedMode
protected final BoundedMode boundedMode
The bounded mode for the contained consumer (default is an unbounded data stream).
-
specificBoundedOffsets
protected final Map<KafkaTopicPartition,Long> specificBoundedOffsets
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
-
boundedTimestampMillis
protected final long boundedTimestampMillis
The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
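The startup and bounded fields above are populated from the connector's `scan.startup.*` and `scan.bounded.*` table options (the bounded options require a Flink version that supports bounded Kafka scans). A hedged sketch with illustrative timestamps:

```sql
CREATE TABLE events (
  id BIGINT,
  payload STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',                          -- placeholder topic
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json',
  -- maps to startupMode = StartupMode.TIMESTAMP and startupTimestampMillis
  'scan.startup.mode' = 'timestamp',
  'scan.startup.timestamp-millis' = '1640995200000',
  -- maps to boundedMode = BoundedMode.TIMESTAMP and boundedTimestampMillis
  'scan.bounded.mode' = 'timestamp',
  'scan.bounded.timestamp-millis' = '1643673600000'
)
```

With both set, the source reads a finite slice of each partition between the two timestamps instead of an unbounded stream.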
-
upsertMode
protected final boolean upsertMode
Flag to determine the source mode. In upsert mode, tombstone messages are kept.
-
tableIdentifier
protected final String tableIdentifier
-
-
Constructor Detail
-
KafkaDynamicSource
public KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, @Nullable org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, @Nullable String keyPrefix, @Nullable List<String> topics, @Nullable Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition,Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition,Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)
-
-
Method Detail
-
getChangelogMode
public org.apache.flink.table.connector.ChangelogMode getChangelogMode()
- Specified by:
getChangelogMode in interface org.apache.flink.table.connector.source.ScanTableSource
-
getScanRuntimeProvider
public org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
- Specified by:
getScanRuntimeProvider in interface org.apache.flink.table.connector.source.ScanTableSource
-
listReadableMetadata
public Map<String,org.apache.flink.table.types.DataType> listReadableMetadata()
- Specified by:
listReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
-
applyReadableMetadata
public void applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
- Specified by:
applyReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
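listReadableMetadata() and applyReadableMetadata() back the METADATA column syntax in DDL. A sketch using metadata keys the Kafka connector documents (`timestamp`, `partition`, `offset`); table and column names are placeholders:

```sql
CREATE TABLE clicks (
  user_id BIGINT,
  url STRING,
  -- resolved through listReadableMetadata() / applyReadableMetadata()
  event_time TIMESTAMP_LTZ(3) METADATA FROM 'timestamp',
  part INT METADATA FROM 'partition' VIRTUAL,
  off BIGINT METADATA FROM 'offset' VIRTUAL
) WITH (
  'connector' = 'kafka',
  'topic' = 'clicks',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
)
```

The selected keys end up in the metadataKeys field and are appended after the physical columns of each produced row, which is why producedDataType differs from physicalDataType once metadata is applied.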
-
supportsMetadataProjection
public boolean supportsMetadataProjection()
- Specified by:
supportsMetadataProjection in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
-
applyWatermark
public void applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
- Specified by:
applyWatermark in interface org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
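applyWatermark() receives the strategy the planner derives from a WATERMARK clause, so watermarks can be generated per Kafka partition inside the source rather than downstream. A sketch (schema and interval are illustrative):

```sql
CREATE TABLE sensor (
  sensor_id STRING,
  reading DOUBLE,
  ts TIMESTAMP(3),
  -- the planner pushes this strategy into the source via applyWatermark()
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'sensor',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
)
```

Generating watermarks per partition avoids the skew that arises when partitions progress at different speeds and the watermark is computed only after the partitions are merged.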
-
copy
public org.apache.flink.table.connector.source.DynamicTableSource copy()
- Specified by:
copy in interface org.apache.flink.table.connector.source.DynamicTableSource
-
asSummaryString
public String asSummaryString()
- Specified by:
asSummaryString in interface org.apache.flink.table.connector.source.DynamicTableSource
-
createKafkaSource
protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
-
-