Class KafkaDynamicSink

java.lang.Object
org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
All Implemented Interfaces:
org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata, org.apache.flink.table.connector.sink.DynamicTableSink

@Internal public class KafkaDynamicSink extends Object implements org.apache.flink.table.connector.sink.DynamicTableSink, org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
A version-agnostic Kafka DynamicTableSink.
  • Nested Class Summary

    Nested classes/interfaces inherited from interface org.apache.flink.table.connector.sink.DynamicTableSink

    org.apache.flink.table.connector.sink.DynamicTableSink.Context, org.apache.flink.table.connector.sink.DynamicTableSink.DataStructureConverter, org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    protected org.apache.flink.table.types.DataType
    consumedDataType
    Data type of the consumed data.
    protected final SinkBufferFlushMode
    flushMode
    Sink buffer flush configuration, currently only supported in upsert mode.
    protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>>
    keyEncodingFormat
    Optional format for encoding keys to Kafka.
    protected final String
    keyPrefix
    Prefix that needs to be removed from fields when constructing the physical data type.
    protected final int[]
    keyProjection
    Indices that determine the key fields and their source positions in the consumed row.
    protected List<String>
    metadataKeys
    Metadata that is appended at the end of a physical sink row.
    protected final Integer
    parallelism
    Parallelism of the physical Kafka producer.
    protected final KafkaPartitioner<org.apache.flink.table.data.RowData>
    partitioner
    Partitioner to select the Kafka partition for each record.
    protected final org.apache.flink.table.types.DataType
    physicalDataType
    Data type used to configure the formats.
    protected final Properties
    properties
    Properties for the Kafka producer.
    protected final Pattern
    topicPattern
    Pattern matching the Kafka topics that records may be produced to.
    protected final List<String>
    topics
    The Kafka topics allowed for producing.
    protected final boolean
    upsertMode
    Flag to determine the sink mode.
    protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>>
    valueEncodingFormat
    Format for encoding values to Kafka.
    protected final int[]
    valueProjection
    Indices that determine the value fields and their source positions in the consumed row.
  • Constructor Summary

    Constructors
    Constructor
    Description
    KafkaDynamicSink(org.apache.flink.table.types.DataType consumedDataType, org.apache.flink.table.types.DataType physicalDataType, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat, int[] keyProjection, int[] valueProjection, String keyPrefix, List<String> topics, Pattern topicPattern, Properties properties, KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner, org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee, boolean upsertMode, SinkBufferFlushMode flushMode, Integer parallelism, String transactionalIdPrefix)
     
  • Method Summary

    Modifier and Type
    Method
    void
    applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
    String
    asSummaryString()
    org.apache.flink.table.connector.sink.DynamicTableSink
    copy()
    boolean
    equals(Object o)
    org.apache.flink.table.connector.ChangelogMode
    getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)
    org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider
    getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
    int
    hashCode()
    Map<String,org.apache.flink.table.types.DataType>
    listWritableMetadata()

    Methods inherited from class java.lang.Object

    clone, finalize, getClass, notify, notifyAll, toString, wait, wait, wait
  • Field Details

    • metadataKeys

      protected List<String> metadataKeys
      Metadata that is appended at the end of a physical sink row.
    • consumedDataType

      protected org.apache.flink.table.types.DataType consumedDataType
      Data type of the consumed data.
    • physicalDataType

      protected final org.apache.flink.table.types.DataType physicalDataType
      Data type to configure the formats.
    • keyEncodingFormat

      @Nullable protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat
      Optional format for encoding keys to Kafka.
    • valueEncodingFormat

      protected final org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat
      Format for encoding values to Kafka.
    • keyProjection

      protected final int[] keyProjection
      Indices that determine the key fields and their source positions in the consumed row.
    • valueProjection

      protected final int[] valueProjection
      Indices that determine the value fields and their source positions in the consumed row.
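
      A minimal plain-Java sketch of what the key/value projections above express (the class and method names here are illustrative, not connector code): each projection lists positions in the consumed row, and the fields at those positions are the ones serialized as the Kafka record key or value.

```java
// Illustrative sketch (not connector code): how keyProjection/valueProjection
// pick source positions out of the consumed row.
public class ProjectionSketch {

    // Select the fields at the given source positions from the consumed row.
    public static Object[] project(Object[] consumedRow, int[] projection) {
        Object[] out = new Object[projection.length];
        for (int i = 0; i < projection.length; i++) {
            out[i] = consumedRow[projection[i]];
        }
        return out;
    }

    public static void main(String[] args) {
        Object[] row = {"id-7", "alice", 42};
        // keyProjection = {0}: the record key is built from field 0 only.
        System.out.println(java.util.Arrays.toString(project(row, new int[] {0})));
        // valueProjection = {1, 2}: the record value uses fields 1 and 2.
        System.out.println(java.util.Arrays.toString(project(row, new int[] {1, 2})));
    }
}
```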
    • keyPrefix

      @Nullable protected final String keyPrefix
      Prefix that needs to be removed from fields when constructing the physical data type.
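
      The prefix handling above can be sketched in plain Java (illustrative names, assuming the common case where key fields in the table schema share a prefix such as `k_` that is stripped when deriving physical field names):

```java
// Illustrative sketch (not connector code): stripping a configured key
// prefix from a schema field name when building the physical data type.
public class KeyPrefixSketch {

    // Remove the key prefix if present; other field names pass through.
    public static String stripPrefix(String fieldName, String keyPrefix) {
        if (keyPrefix != null && fieldName.startsWith(keyPrefix)) {
            return fieldName.substring(keyPrefix.length());
        }
        return fieldName;
    }

    public static void main(String[] args) {
        System.out.println(stripPrefix("k_user_id", "k_")); // user_id
        System.out.println(stripPrefix("payload", "k_"));   // payload
    }
}
```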
    • topics

      protected final List<String> topics
      The Kafka topics allowed for producing.
    • topicPattern

      protected final Pattern topicPattern
      Pattern matching the Kafka topics that records may be produced to.
    • properties

      protected final Properties properties
      Properties for the Kafka producer.
    • partitioner

      @Nullable protected final KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner
      Partitioner to select the Kafka partition for each record.
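
      The idea a custom partitioner encodes can be sketched in plain Java (illustrative names; a real `KafkaPartitioner` implementation receives the record, serialized key/value, target topic, and the available partitions): hash the key and map it deterministically onto one of the available partitions.

```java
// Illustrative sketch (not connector code): deterministic key-based
// partition selection over the partitions available for the target topic.
public class PartitionerSketch {

    // Hash the serialized key and map it onto one of the available partitions.
    public static int partitionForKey(byte[] key, int[] partitions) {
        int h = java.util.Arrays.hashCode(key);
        return partitions[(h & 0x7fffffff) % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2, 3};
        // The same key always maps to the same partition.
        int p = partitionForKey("user-42".getBytes(), partitions);
        System.out.println(p == partitionForKey("user-42".getBytes(), partitions));
    }
}
```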
    • upsertMode

      protected final boolean upsertMode
      Flag to determine the sink mode. In upsert mode, the sink transforms delete/update-before messages into tombstone messages.
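
      The tombstone behavior described above can be sketched in plain Java (illustrative names; `RowKind` here is a stand-in for Flink's changelog row kinds): in upsert mode, a delete or update-before change is written as a Kafka record with the key set and a null value, which Kafka log compaction treats as a deletion marker.

```java
// Illustrative sketch (not connector code): in upsert mode, DELETE and
// UPDATE_BEFORE changes become Kafka tombstones (null record value).
public class UpsertSketch {

    public enum RowKind { INSERT, UPDATE_AFTER, UPDATE_BEFORE, DELETE }

    // Returns the value bytes to write, or null for a tombstone.
    public static byte[] toKafkaValue(RowKind kind, byte[] serializedValue) {
        if (kind == RowKind.DELETE || kind == RowKind.UPDATE_BEFORE) {
            return null; // tombstone: key only, null value
        }
        return serializedValue;
    }

    public static void main(String[] args) {
        System.out.println(toKafkaValue(RowKind.DELETE, new byte[] {1}) == null);
        System.out.println(toKafkaValue(RowKind.INSERT, new byte[] {1}) != null);
    }
}
```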
    • flushMode

      protected final SinkBufferFlushMode flushMode
      Sink buffer flush configuration, currently only supported in upsert mode.
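
      A minimal plain-Java sketch of the buffering idea behind the flush config above (illustrative names; the flush interval is omitted and only a batch-size trigger is shown): in upsert mode the buffer keeps only the latest change per key, so repeated updates to the same key collapse before being flushed to Kafka.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch (not connector code): upsert-style buffering that
// deduplicates by key and flushes when the batch size is reached.
public class BufferFlushSketch {

    private final int batchSize;
    private final Map<String, String> buffer = new LinkedHashMap<>();
    private int flushes = 0;

    public BufferFlushSketch(int batchSize) {
        this.batchSize = batchSize;
    }

    public void write(String key, String value) {
        buffer.put(key, value); // later change for the same key replaces earlier one
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    public void flush() {
        buffer.clear(); // a real sink would emit the buffered records here
        flushes++;
    }

    public int bufferedCount() { return buffer.size(); }

    public int flushCount() { return flushes; }

    public static void main(String[] args) {
        BufferFlushSketch sink = new BufferFlushSketch(2);
        sink.write("k1", "v1");
        sink.write("k1", "v2"); // same key: replaced, buffer stays at size 1
        sink.write("k2", "v1"); // buffer reaches 2, triggering a flush
        System.out.println(sink.flushCount()); // 1
    }
}
```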
    • parallelism

      @Nullable protected final Integer parallelism
      Parallelism of the physical Kafka producer.
  • Constructor Details

    • KafkaDynamicSink

      public KafkaDynamicSink(org.apache.flink.table.types.DataType consumedDataType, org.apache.flink.table.types.DataType physicalDataType, @Nullable org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat, org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat, int[] keyProjection, int[] valueProjection, @Nullable String keyPrefix, @Nullable List<String> topics, @Nullable Pattern topicPattern, Properties properties, @Nullable KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner, org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee, boolean upsertMode, SinkBufferFlushMode flushMode, @Nullable Integer parallelism, @Nullable String transactionalIdPrefix)
  • Method Details

    • getChangelogMode

      public org.apache.flink.table.connector.ChangelogMode getChangelogMode(org.apache.flink.table.connector.ChangelogMode requestedMode)
      Specified by:
      getChangelogMode in interface org.apache.flink.table.connector.sink.DynamicTableSink
    • getSinkRuntimeProvider

      public org.apache.flink.table.connector.sink.DynamicTableSink.SinkRuntimeProvider getSinkRuntimeProvider(org.apache.flink.table.connector.sink.DynamicTableSink.Context context)
      Specified by:
      getSinkRuntimeProvider in interface org.apache.flink.table.connector.sink.DynamicTableSink
    • listWritableMetadata

      public Map<String,org.apache.flink.table.types.DataType> listWritableMetadata()
      Specified by:
      listWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
    • applyWritableMetadata

      public void applyWritableMetadata(List<String> metadataKeys, org.apache.flink.table.types.DataType consumedDataType)
      Specified by:
      applyWritableMetadata in interface org.apache.flink.table.connector.sink.abilities.SupportsWritingMetadata
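
      A plain-Java sketch of the metadata layout these two methods establish (illustrative names; `"timestamp"` is used here as an example metadata key, assuming it is among the keys returned by listWritableMetadata): the planner picks keys from listWritableMetadata, passes them to applyWritableMetadata, and the corresponding columns are appended after the physical fields of the consumed row.

```java
// Illustrative sketch (not connector code): metadata columns are appended
// after the physical fields, in the order of the accepted metadata keys.
public class MetadataSketch {

    // Look up a metadata column in the consumed row by its key.
    public static Object metadataValue(
            Object[] consumedRow,
            int physicalFieldCount,
            java.util.List<String> metadataKeys,
            String key) {
        int pos = physicalFieldCount + metadataKeys.indexOf(key);
        return consumedRow[pos];
    }

    public static void main(String[] args) {
        // Two physical fields, then the example "timestamp" metadata column.
        Object[] row = {"user-1", "payload", 1700000000000L};
        Object ts = metadataValue(row, 2, java.util.List.of("timestamp"), "timestamp");
        System.out.println(ts); // 1700000000000
    }
}
```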
    • copy

      public org.apache.flink.table.connector.sink.DynamicTableSink copy()
      Specified by:
      copy in interface org.apache.flink.table.connector.sink.DynamicTableSink
    • asSummaryString

      public String asSummaryString()
      Specified by:
      asSummaryString in interface org.apache.flink.table.connector.sink.DynamicTableSink
    • equals

      public boolean equals(Object o)
      Overrides:
      equals in class Object
    • hashCode

      public int hashCode()
      Overrides:
      hashCode in class Object