Class KafkaDynamicSink
java.lang.Object
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- All Implemented Interfaces:
org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata, org.apache.flink.table.connector.sink.DynamicTableSink
@Internal
public class KafkaDynamicSink
extends Object
implements org.apache.flink.table.connector.sink.DynamicTableSink, org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
A version-agnostic Kafka DynamicTableSink.
-
Nested Class Summary
Nested classes/interfaces inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink
org.apache.flink.table.connector.sink.DynamicTableSink.Context, org.apache.flink.table.connector.sink.DynamicTableSink.DataStructureConverter, org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider -
Field Summary
Fields
- protected org.apache.flink.table.types.DataType consumedDataType: Data type of the consumed data.
- protected final SinkBufferFlushMode flushMode: Sink buffer flush config, currently only supported in upsert mode.
- protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat: Optional format for encoding keys to Kafka.
- protected final String keyPrefix: Prefix that needs to be removed from fields when constructing the physical data type.
- protected final int[] keyProjection: Indices that determine the key fields and the source position in the consumed row.
- protected List<String> metadataKeys: Metadata that is appended at the end of a physical sink row.
- protected final Integer parallelism: Parallelism of the physical Kafka producer.
- protected final KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner: Partitioner to select the Kafka partition for each item.
- protected final org.apache.flink.table.types.DataType physicalDataType: Data type to configure the formats.
- protected final Properties properties: Properties for the Kafka producer.
- protected final Pattern topicPattern: The Kafka topic pattern of topics allowed to produce to.
- protected final List<String> topics: The Kafka topics to allow for producing.
- protected final boolean upsertMode: Flag to determine the sink mode.
- protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat: Format for encoding values to Kafka.
- protected final int[] valueProjection: Indices that determine the value fields and the source position in the consumed row.
-
Constructor Summary
Constructors
KafkaDynamicSink(org.apache.flink.table.types.DataType consumedDataType, org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner, org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee, boolean upsertMode, SinkBufferFlushMode flushMode, Integer parallelism, String transactionalIdPrefix)
-
Method Summary
Methods
- void applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
- String asSummaryString()
- org.apache.flink.table.connector.sink.DynamicTableSink copy()
- boolean equals(Object o)
- org.apache.flink.table.connector.ChangelogMode getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)
- org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
- int hashCode()
- Map<String,org.apache.flink.table.types.DataType> listWritableMetadata()
-
Field Details
-
metadataKeys
protected List<String> metadataKeys
Metadata that is appended at the end of a physical sink row.
-
consumedDataType
protected org.apache.flink.table.types.DataType consumedDataType
Data type of the consumed data.
-
physicalDataType
protected final org.apache.flink.table.types.DataType physicalDataType
Data type to configure the formats.
-
keyEncodingFormat
@Nullable protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat
Optional format for encoding keys to Kafka.
-
valueEncodingFormat
protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat
Format for encoding values to Kafka.
-
keyProjection
protected final int[] keyProjection
Indices that determine the key fields and the source position in the consumed row.
-
valueProjection
protected final int[] valueProjection
Indices that determine the value fields and the source position in the consumed row.
-
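The meaning of these projection arrays can be illustrated with a minimal, self-contained sketch (the ProjectionDemo class and its sample row are hypothetical, not Flink code): each entry is a source position in the consumed row, and the selected fields form the Kafka key or value record.

```java
import java.util.Arrays;

// Hypothetical helper illustrating how projection index arrays, like
// keyProjection and valueProjection, select fields from a consumed row.
public class ProjectionDemo {
    // Pick the fields at the given source positions from the consumed row.
    public static Object[] project(Object[] consumedRow, int[] projection) {
        Object[] out = new Object[projection.length];
        for (int i = 0; i < projection.length; i++) {
            out[i] = consumedRow[projection[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        Object[] row = {1001, "alice", "2024-01-01T00:00:00Z"};
        int[] keyProjection = {0};         // key record: field 0 only
        int[] valueProjection = {0, 1, 2}; // value record: all fields
        System.out.println(Arrays.toString(project(row, keyProjection)));   // [1001]
        System.out.println(Arrays.toString(project(row, valueProjection))); // [1001, alice, 2024-01-01T00:00:00Z]
    }
}
```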
keyPrefix
@Nullable protected final String keyPrefix
Prefix that needs to be removed from fields when constructing the physical data type.
-
topics
@Nullable protected final List<String> topics
The Kafka topics to allow for producing.
-
topicPattern
@Nullable protected final Pattern topicPattern
The Kafka topic pattern of topics allowed to produce to.
-
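Since the pattern is a plain java.util.regex.Pattern, a topic is allowed when it fully matches. A minimal sketch (the TopicPatternDemo class and the "orders-" naming are hypothetical):

```java
import java.util.regex.Pattern;

// Hypothetical helper: a topic is allowed for producing when it fully
// matches the configured topic pattern.
public class TopicPatternDemo {
    public static boolean allowed(Pattern topicPattern, String topic) {
        return topicPattern.matcher(topic).matches();
    }

    public static void main(String[] args) {
        // Assumed example pattern: allow producing to any "orders-*" topic.
        Pattern topicPattern = Pattern.compile("orders-.*");
        System.out.println(allowed(topicPattern, "orders-eu"));   // true
        System.out.println(allowed(topicPattern, "payments-eu")); // false
    }
}
```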
properties
protected final Properties properties
Properties for the Kafka producer.
-
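These are standard java.util.Properties holding Kafka producer configuration. A minimal sketch of assembling them (the ProducerPropertiesDemo class is hypothetical; "bootstrap.servers" and "acks" are standard Kafka producer config keys):

```java
import java.util.Properties;

// Hypothetical helper that builds a base set of Kafka producer properties
// of the kind passed to the sink.
public class ProducerPropertiesDemo {
    public static Properties baseProperties(String bootstrapServers) {
        Properties props = new Properties();
        // "bootstrap.servers" and "acks" are standard Kafka producer keys.
        props.setProperty("bootstrap.servers", bootstrapServers);
        props.setProperty("acks", "all");
        return props;
    }

    public static void main(String[] args) {
        Properties props = baseProperties("localhost:9092");
        System.out.println(props.getProperty("bootstrap.servers")); // localhost:9092
    }
}
```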
partitioner
@Nullable protected final KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner
Partitioner to select the Kafka partition for each item.
-
upsertMode
protected final boolean upsertMode
Flag to determine the sink mode. In upsert mode, the sink transforms delete/update-before messages into tombstone messages.
-
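The upsert translation described above can be sketched in isolation (the UpsertDemo class and its string-based row kinds are hypothetical simplifications, not the actual Flink implementation): inserts and updates-after carry their value, while deletes and updates-before become (key, null) tombstone records.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

// Hypothetical sketch of upsert-mode record translation: a changelog row
// becomes a (key, value) Kafka record, where delete/update-before rows
// map to a null-valued tombstone.
public class UpsertDemo {
    public static Map.Entry<String, String> toRecord(String kind, String key, String value) {
        boolean tombstone = kind.equals("DELETE") || kind.equals("UPDATE_BEFORE");
        return new SimpleEntry<>(key, tombstone ? null : value);
    }

    public static void main(String[] args) {
        System.out.println(toRecord("INSERT", "user-1", "alice")); // user-1=alice
        System.out.println(toRecord("DELETE", "user-1", "alice")); // user-1=null
    }
}
```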
flushMode
protected final SinkBufferFlushMode flushMode
Sink buffer flush config, currently only supported in upsert mode.
-
parallelism
@Nullable protected final Integer parallelism
Parallelism of the physical Kafka producer.
-
-
-
Constructor Details
-
KafkaDynamicSink
public KafkaDynamicSink(
    org.apache.flink.table.types.DataType consumedDataType,
    org.apache.flink.table.types.DataType physicalDataType,
    @Nullable org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat,
    org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat,
    int[] keyProjection,
    int[] valueProjection,
    @Nullable String keyPrefix,
    @Nullable List<String> topics,
    @Nullable Pattern topicPattern,
    Properties properties,
    @Nullable KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner,
    org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee,
    boolean upsertMode,
    SinkBufferFlushMode flushMode,
    @Nullable Integer parallelism,
    @Nullable String transactionalIdPrefix)
-
-
Method Details
-
getChangelogMode
public org.apache.flink.table.connector.ChangelogMode getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)
- Specified by:
getChangelogMode in interface org.apache.flink.table.connector.sink.DynamicTableSink
-
getSinkRuntimeProvider
public org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
- Specified by:
getSinkRuntimeProvider in interface org.apache.flink.table.connector.sink.DynamicTableSink
-
listWritableMetadata
public Map<String,org.apache.flink.table.types.DataType> listWritableMetadata()
- Specified by:
listWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
-
applyWritableMetadata
public void applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
- Specified by:
applyWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
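The effect of applying writable metadata on the consumed row layout can be sketched in isolation (the WritableMetadataDemo class is hypothetical; "headers" and "timestamp" are assumed examples of Kafka metadata keys): metadata fields are appended after the physical fields, in the order the keys are given.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: after applyWritableMetadata, the consumed row is
// the physical fields followed by one field per requested metadata key,
// appended at the end in order.
public class WritableMetadataDemo {
    public static List<String> consumedFields(List<String> physicalFields, List<String> metadataKeys) {
        List<String> out = new ArrayList<>(physicalFields);
        out.addAll(metadataKeys);
        return out;
    }

    public static void main(String[] args) {
        List<String> fields = consumedFields(List.of("id", "name"), List.of("headers", "timestamp"));
        System.out.println(fields); // [id, name, headers, timestamp]
    }
}
```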
-
copy
public org.apache.flink.table.connector.sink.DynamicTableSink copy()
- Specified by:
copy in interface org.apache.flink.table.connector.sink.DynamicTableSink
-
asSummaryString
public String asSummaryString()
- Specified by:
asSummaryString in interface org.apache.flink.table.connector.sink.DynamicTableSink
-
equals
public boolean equals(Object o)
-
hashCode
public int hashCode()
-