Class KafkaDynamicSource
java.lang.Object
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- All Implemented Interfaces:
org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown, org.apache.flink.table.connector.source.DynamicTableSource, org.apache.flink.table.connector.source.ScanTableSource
@Internal
public class KafkaDynamicSource
extends Object
implements org.apache.flink.table.connector.source.ScanTableSource, org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata, org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
A version-agnostic Kafka ScanTableSource.
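This class is not instantiated directly by users; the SQL planner creates it through the Kafka connector factory. A minimal, illustrative DDL that would result in a KafkaDynamicSource (table name, topic, and broker address are placeholders):

```sql
-- Hypothetical table definition; the planner translates the 'kafka'
-- connector options below into a KafkaDynamicSource instance.
CREATE TABLE orders (
  order_id BIGINT,
  amount DOUBLE,
  ts TIMESTAMP(3) METADATA FROM 'timestamp'  -- read via SupportsReadingMetadata
) WITH (
  'connector' = 'kafka',
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);
```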
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.DynamicTableSource
org.apache.flink.table.connector.source.DynamicTableSource.Context, org.apache.flink.table.connector.source.DynamicTableSource.DataStructureConverter
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.source.ScanTableSource
org.apache.flink.table.connector.source.ScanTableSource.ScanContext, org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider
Field Summary
Fields
protected final BoundedMode boundedMode
The bounded mode for the contained consumer (default is an unbounded data stream).
protected final long boundedTimestampMillis
The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
Optional format for decoding keys from Kafka.
protected final String keyPrefix
Prefix that needs to be removed from fields when constructing the physical data type.
protected final int[] keyProjection
Indices that determine the key fields and the target position in the produced row.
protected List<String> metadataKeys
Metadata that is appended at the end of a physical source row.
protected final org.apache.flink.table.types.DataType physicalDataType
Data type to configure the formats.
protected org.apache.flink.table.types.DataType producedDataType
Data type that describes the final output of the source.
protected final Properties properties
Properties for the Kafka consumer.
protected final Map<KafkaTopicPartition, Long> specificBoundedOffsets
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
protected final Map<KafkaTopicPartition, Long> specificStartupOffsets
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
protected final StartupMode startupMode
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
protected final long startupTimestampMillis
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
protected final String tableIdentifier
protected final Pattern topicPattern
The Kafka topic pattern to consume.
protected final List<String> topics
The Kafka topics to consume.
protected final boolean upsertMode
Flag to determine source mode.
protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
Format for decoding values from Kafka.
protected final int[] valueProjection
Indices that determine the value fields and the target position in the produced row.
protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
Watermark strategy that is used to generate per-partition watermarks.
Constructor Summary
ConstructorsConstructorDescriptionKafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition, Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition, Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier) -
Method Summary
Methods
void applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
void applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
String asSummaryString()
org.apache.flink.table.connector.source.DynamicTableSource copy()
protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
boolean equals(Object o)
org.apache.flink.table.connector.ChangelogMode getChangelogMode()
org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
int hashCode()
Map<String, org.apache.flink.table.types.DataType> listReadableMetadata()
boolean supportsMetadataProjection()
-
Field Details
-
producedDataType
protected org.apache.flink.table.types.DataType producedDataType
Data type that describes the final output of the source.
metadataKeys
protected List<String> metadataKeys
Metadata that is appended at the end of a physical source row.
watermarkStrategy
@Nullable protected org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy
Watermark strategy that is used to generate per-partition watermarks.
physicalDataType
protected final org.apache.flink.table.types.DataType physicalDataType
Data type to configure the formats.
keyDecodingFormat
@Nullable protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat
Optional format for decoding keys from Kafka.
valueDecodingFormat
protected final org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat
Format for decoding values from Kafka.
keyProjection
protected final int[] keyProjection
Indices that determine the key fields and the target position in the produced row.
valueProjection
protected final int[] valueProjection
Indices that determine the value fields and the target position in the produced row.
keyPrefix
@Nullable protected final String keyPrefix
Prefix that needs to be removed from fields when constructing the physical data type.
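The key-related fields (keyDecodingFormat, keyProjection, keyPrefix) are populated from the key-related connector options. An illustrative WITH-clause fragment (option values are placeholders):

```sql
-- Key fields carry a 'k_' prefix in the table schema; keyPrefix strips it
-- when the physical data type for the key format is constructed.
'key.format' = 'json',
'key.fields' = 'k_order_id',
'key.fields-prefix' = 'k_',
'value.format' = 'json',
'value.fields-include' = 'EXCEPT_KEY'  -- required when a key prefix is set
```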
topics
@Nullable protected final List<String> topics
The Kafka topics to consume.
topicPattern
@Nullable protected final Pattern topicPattern
The Kafka topic pattern to consume.
properties
protected final Properties properties
Properties for the Kafka consumer.
startupMode
protected final StartupMode startupMode
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
specificStartupOffsets
protected final Map<KafkaTopicPartition, Long> specificStartupOffsets
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
startupTimestampMillis
protected final long startupTimestampMillis
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
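The three startup-related fields (startupMode, specificStartupOffsets, startupTimestampMillis) correspond to the 'scan.startup.*' connector options; only one mode applies per table. An illustrative fragment (offset and timestamp values are placeholders):

```sql
'scan.startup.mode' = 'group-offsets'  -- the default: StartupMode.GROUP_OFFSETS
-- or:
-- 'scan.startup.mode' = 'timestamp',
-- 'scan.startup.timestamp-millis' = '1667232000000'   -- startupTimestampMillis
-- or:
-- 'scan.startup.mode' = 'specific-offsets',
-- 'scan.startup.specific-offsets' = 'partition:0,offset:42;partition:1,offset:300'
```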
boundedMode
protected final BoundedMode boundedMode
The bounded mode for the contained consumer (default is an unbounded data stream).
specificBoundedOffsets
protected final Map<KafkaTopicPartition, Long> specificBoundedOffsets
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
boundedTimestampMillis
protected final long boundedTimestampMillis
The bounded timestamp to locate partition offsets; only relevant when bounded mode is BoundedMode.TIMESTAMP.
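Analogously, the bounded-related fields (boundedMode, specificBoundedOffsets, boundedTimestampMillis) are set from the 'scan.bounded.*' options; without them the source reads an unbounded stream. An illustrative fragment (values are placeholders):

```sql
'scan.bounded.mode' = 'timestamp',
'scan.bounded.timestamp-millis' = '1667318400000'  -- boundedTimestampMillis
-- or:
-- 'scan.bounded.mode' = 'specific-offsets',
-- 'scan.bounded.specific-offsets' = 'partition:0,offset:9999'
```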
upsertMode
protected final boolean upsertMode
Flag to determine the source mode. In upsert mode, the source keeps tombstone messages.
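The upsertMode flag is set to true when the source is created through the 'upsert-kafka' connector rather than 'kafka'. An illustrative upsert table (names and addresses are placeholders):

```sql
-- Hypothetical upsert table; with 'upsert-kafka' the source runs in upsert
-- mode, so records with NULL values (tombstones) are kept and interpreted
-- as deletions for the given key.
CREATE TABLE latest_order_state (
  order_id BIGINT,
  amount DOUBLE,
  PRIMARY KEY (order_id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'orders-compacted',
  'properties.bootstrap.servers' = 'localhost:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);
```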
tableIdentifier
protected final String tableIdentifier
-
Constructor Details
-
KafkaDynamicSource
public KafkaDynamicSource(org.apache.flink.table.types.DataType physicalDataType, @Nullable org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> keyDecodingFormat, org.apache.flink.table.connector.format.DecodingFormat<org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData>> valueDecodingFormat, int[] keyProjection, int[] valueProjection, @Nullable String keyPrefix, @Nullable List<String> topics, @Nullable Pattern topicPattern, Properties properties, StartupMode startupMode, Map<KafkaTopicPartition, Long> specificStartupOffsets, long startupTimestampMillis, BoundedMode boundedMode, Map<KafkaTopicPartition, Long> specificBoundedOffsets, long boundedTimestampMillis, boolean upsertMode, String tableIdentifier)
-
-
Method Details
-
getChangelogMode
public org.apache.flink.table.connector.ChangelogMode getChangelogMode()
- Specified by:
getChangelogMode in interface org.apache.flink.table.connector.source.ScanTableSource
-
getScanRuntimeProvider
public org.apache.flink.table.connector.source.ScanTableSource.ScanRuntimeProvider getScanRuntimeProvider(org.apache.flink.table.connector.source.ScanTableSource.ScanContext context)
- Specified by:
getScanRuntimeProvider in interface org.apache.flink.table.connector.source.ScanTableSource
-
listReadableMetadata
public Map<String, org.apache.flink.table.types.DataType> listReadableMetadata()
- Specified by:
listReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
-
applyReadableMetadata
public void applyReadableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType producedDataType)
- Specified by:
applyReadableMetadata in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
-
supportsMetadataProjection
public boolean supportsMetadataProjection()
- Specified by:
supportsMetadataProjection in interface org.apache.flink.table.connector.source.abilities.SupportsReadingMetadata
-
applyWatermark
public void applyWatermark(org.apache.flink.api.common.eventtime.WatermarkStrategy<org.apache.flink.table.data.RowData> watermarkStrategy)
- Specified by:
applyWatermark in interface org.apache.flink.table.connector.source.abilities.SupportsWatermarkPushDown
-
copy
public org.apache.flink.table.connector.source.DynamicTableSource copy()
- Specified by:
copy in interface org.apache.flink.table.connector.source.DynamicTableSource
-
asSummaryString
public String asSummaryString()
- Specified by:
asSummaryString in interface org.apache.flink.table.connector.source.DynamicTableSource
-
equals
public boolean equals(Object o)
hashCode
public int hashCode()
createKafkaSource
protected KafkaSource<org.apache.flink.table.data.RowData> createKafkaSource(org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> keyDeserialization, org.apache.flink.api.common.serialization.DeserializationSchema<org.apache.flink.table.data.RowData> valueDeserialization, org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
-