Class KafkaDynamicSink

  • All Implemented Interfaces:
    org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata, org.apache.flink.table.connector.sink.DynamicTableSink

    @Internal
    public class KafkaDynamicSink
    extends Object
    implements org.apache.flink.table.connector.sink.DynamicTableSink, org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
    A version-agnostic Kafka DynamicTableSink.
    • Nested Class Summary

      • Nested classes/interfaces inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink

        org.apache.flink.table.connector.sink.DynamicTableSink.Context, org.apache.flink.table.connector.sink.DynamicTableSink.DataStructureConverter, org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider
    • Field Summary

      Fields 
      Modifier and Type Field Description
      protected org.apache.flink.table.types.DataType consumedDataType
      Data type of the rows consumed by the sink.
      protected SinkBufferFlushMode flushMode
      Sink buffer flush config, which is currently only supported in upsert mode.
      protected org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat
      Optional format for encoding keys to Kafka.
      protected String keyPrefix
      Prefix that needs to be removed from fields when constructing the physical data type.
      protected int[] keyProjection
      Indices that determine the key fields and the source position in the consumed row.
      protected List<String> metadataKeys
      Metadata that is appended at the end of a physical sink row.
      protected Integer parallelism
      Parallelism of the physical Kafka producer.
      protected FlinkKafkaPartitioner<org.apache.flink.table.data.RowData> partitioner
      Partitioner to select Kafka partition for each item.
      protected org.apache.flink.table.types.DataType physicalDataType
      Data type to configure the formats.
      protected Properties properties
      Properties for the Kafka producer.
      protected String topic
      The Kafka topic to write to.
      protected boolean upsertMode
      Flag to determine sink mode.
      protected org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat
      Format for encoding values to Kafka.
      protected int[] valueProjection
      Indices that determine the value fields and the source position in the consumed row.
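The key and value projections above are plain index arrays into the consumed row: each entry names the position of a key (or value) field in the consumed row. A minimal sketch of how such a projection selects fields (the helper and sample data are illustrative stand-ins, not code from this class):

```java
import java.util.Arrays;

public class ProjectionSketch {
    // Selects the fields at the given indices from a consumed row,
    // mirroring how keyProjection/valueProjection pick the key and
    // value fields out of the row the sink consumes.
    static Object[] project(Object[] consumedRow, int[] projection) {
        Object[] out = new Object[projection.length];
        for (int i = 0; i < projection.length; i++) {
            out[i] = consumedRow[projection[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        Object[] row = {"user-42", "click", 1700000000L};
        int[] keyProjection = {0};       // key = first field
        int[] valueProjection = {1, 2};  // value = remaining fields
        System.out.println(Arrays.toString(project(row, keyProjection)));
        System.out.println(Arrays.toString(project(row, valueProjection)));
    }
}
```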
    • Constructor Summary

      Constructors 
      Constructor Description
      KafkaDynamicSink​(org.apache.flink.table.types.DataType consumedDataType, org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, String topic, Properties properties, FlinkKafkaPartitioner<org.apache.flink.table.data.RowData> partitioner, org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee, boolean upsertMode, SinkBufferFlushMode flushMode, Integer parallelism, String transactionalIdPrefix)  
    • Field Detail

      • metadataKeys

        protected List<String> metadataKeys
        Metadata that is appended at the end of a physical sink row.
      • consumedDataType

        protected org.apache.flink.table.types.DataType consumedDataType
        Data type of the rows consumed by the sink.
      • physicalDataType

        protected final org.apache.flink.table.types.DataType physicalDataType
        Data type to configure the formats.
      • keyEncodingFormat

        @Nullable
        protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat
        Optional format for encoding keys to Kafka.
      • valueEncodingFormat

        protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat
        Format for encoding values to Kafka.
      • keyProjection

        protected final int[] keyProjection
        Indices that determine the key fields and the source position in the consumed row.
      • valueProjection

        protected final int[] valueProjection
        Indices that determine the value fields and the source position in the consumed row.
      • keyPrefix

        @Nullable
        protected final String keyPrefix
        Prefix that needs to be removed from fields when constructing the physical data type.
      • topic

        protected final String topic
        The Kafka topic to write to.
      • properties

        protected final Properties properties
        Properties for the Kafka producer.
      • partitioner

        @Nullable
        protected final FlinkKafkaPartitioner<org.apache.flink.table.data.RowData> partitioner
        Partitioner to select Kafka partition for each item.
      • upsertMode

        protected final boolean upsertMode
        Flag to determine sink mode. In upsert mode, the sink transforms delete/update-before messages into tombstone messages.
      • flushMode

        protected final SinkBufferFlushMode flushMode
        Sink buffer flush config, which is currently only supported in upsert mode.
      • parallelism

        @Nullable
        protected final Integer parallelism
        Parallelism of the physical Kafka producer.
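As noted for upsertMode above, in upsert mode a delete or update-before row is written as a Kafka tombstone: a record with the original key and a null value. A hedged sketch of that transformation (the RowKind enum and record pair here are simplified stand-ins for Flink's types, not this class's code):

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;

public class TombstoneSketch {
    // Simplified stand-in for org.apache.flink.types.RowKind.
    enum RowKind { INSERT, UPDATE_BEFORE, UPDATE_AFTER, DELETE }

    // In upsert mode, DELETE/UPDATE_BEFORE become key -> null (a Kafka
    // tombstone, which compaction uses to drop the key); other changes
    // are written as key -> serialized value.
    static Map.Entry<String, String> toKafkaRecord(RowKind kind, String key, String value) {
        boolean tombstone = kind == RowKind.DELETE || kind == RowKind.UPDATE_BEFORE;
        return new SimpleEntry<>(key, tombstone ? null : value);
    }
}
```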
    • Constructor Detail

      • KafkaDynamicSink

        public KafkaDynamicSink​(org.apache.flink.table.types.DataType consumedDataType,
                                org.apache.flink.table.types.DataType physicalDataType,
                                @Nullable
                                org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat,
                                org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat,
                                int[] keyProjection,
                                int[] valueProjection,
                                @Nullable
                                String keyPrefix,
                                String topic,
                                Properties properties,
                                @Nullable
                                FlinkKafkaPartitioner<org.apache.flink.table.data.RowData> partitioner,
                                org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee,
                                boolean upsertMode,
                                SinkBufferFlushMode flushMode,
                                @Nullable
                                Integer parallelism,
                                @Nullable
                                String transactionalIdPrefix)
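The keyPrefix parameter corresponds to the Kafka SQL connector's 'key.fields-prefix' option: when the physical data type is constructed, the prefix is stripped from the key field names. A small sketch of that renaming (an illustrative helper, not this class's code):

```java
public class KeyPrefixSketch {
    // Removes the configured key prefix from a field name, as is done when
    // deriving physical field names from prefixed key columns. A null
    // prefix (the @Nullable case) leaves names unchanged.
    static String stripKeyPrefix(String fieldName, String keyPrefix) {
        if (keyPrefix != null && fieldName.startsWith(keyPrefix)) {
            return fieldName.substring(keyPrefix.length());
        }
        return fieldName;
    }
}
```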
    • Method Detail

      • getChangelogMode

        public org.apache.flink.table.connector.ChangelogMode getChangelogMode​(org.apache.flink.table.connector.ChangelogMode requestedMode)
        Specified by:
        getChangelogMode in interface org.apache.flink.table.connector.sink.DynamicTableSink
      • getSinkRuntimeProvider

        public org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider​(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
        Specified by:
        getSinkRuntimeProvider in interface org.apache.flink.table.connector.sink.DynamicTableSink
      • listWritableMetadata

        public Map<String, org.apache.flink.table.types.DataType> listWritableMetadata()
        Specified by:
        listWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
      • applyWritableMetadata

        public void applyWritableMetadata​(List<String> metadataKeys,
                                          org.apache.flink.table.types.DataType consumedDataType)
        Specified by:
        applyWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
      • copy

        public org.apache.flink.table.connector.sink.DynamicTableSink copy()
        Specified by:
        copy in interface org.apache.flink.table.connector.sink.DynamicTableSink
      • asSummaryString

        public String asSummaryString()
        Specified by:
        asSummaryString in interface org.apache.flink.table.connector.sink.DynamicTableSink
      • hashCode

        public int hashCode()
        Overrides:
        hashCode in class Object
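As the metadataKeys field notes, the metadata columns accepted via applyWritableMetadata are appended at the end of the physical sink row, so metadata key i sits at position physicalFieldCount + i in the consumed row. A sketch of that layout (field and metadata names are illustrative examples, not a definitive list):

```java
import java.util.ArrayList;
import java.util.List;

public class MetadataLayoutSketch {
    // Builds the consumed field list: physical columns first, then the
    // requested writable metadata columns appended at the end.
    static List<String> consumedFields(List<String> physicalFields, List<String> metadataKeys) {
        List<String> consumed = new ArrayList<>(physicalFields);
        consumed.addAll(metadataKeys);
        return consumed;
    }

    public static void main(String[] args) {
        // Hypothetical table with two physical columns and two metadata columns.
        System.out.println(consumedFields(List.of("id", "name"), List.of("headers", "timestamp")));
    }
}
```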