Uses of Interface
org.apache.flink.connector.kafka.sink.KafkaPartitioner
Packages that use KafkaPartitioner:

    org.apache.flink.connector.kafka.sink
    org.apache.flink.streaming.connectors.kafka.partitioner
    org.apache.flink.streaming.connectors.kafka.table
Uses of KafkaPartitioner in org.apache.flink.connector.kafka.sink
Methods in org.apache.flink.connector.kafka.sink with parameters of type KafkaPartitioner:

    <T extends IN> KafkaRecordSerializationSchemaBuilder<T>
    KafkaRecordSerializationSchemaBuilder.setPartitioner(KafkaPartitioner<? super T> partitioner)
        Sets a custom partitioner determining the target partition of the target topic.
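To illustrate what a partitioner passed to setPartitioner typically does, the sketch below implements a key-hash partitioner. So that it compiles without Flink on the classpath, it re-declares a minimal local stand-in for the KafkaPartitioner interface (an assumption mirroring the shape of the real org.apache.flink.connector.kafka.sink.KafkaPartitioner partition method); the class names here are local to the example, not Flink's.

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyHashPartitionerSketch {

    /**
     * Local stand-in assumed to mirror the partition method of
     * org.apache.flink.connector.kafka.sink.KafkaPartitioner; declared
     * here only so this sketch is self-contained.
     */
    interface KafkaPartitioner<T> {
        int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions);
    }

    /** Routes records with the same serialized key to the same partition. */
    static final class KeyHashPartitioner<T> implements KafkaPartitioner<T> {
        @Override
        public int partition(T record, byte[] key, byte[] value, String targetTopic, int[] partitions) {
            if (partitions.length == 0) {
                throw new IllegalArgumentException("no partitions for topic " + targetTopic);
            }
            if (key == null) {
                return partitions[0]; // no key: fall back to the first partition
            }
            // floorMod keeps the index non-negative even for negative hash codes
            int idx = Math.floorMod(Arrays.hashCode(key), partitions.length);
            return partitions[idx];
        }
    }

    public static void main(String[] args) {
        KafkaPartitioner<String> partitioner = new KeyHashPartitioner<>();
        int[] partitions = {0, 1, 2, 3};
        byte[] key = "user-42".getBytes(StandardCharsets.UTF_8);
        int first = partitioner.partition("payload", key, null, "orders", partitions);
        int second = partitioner.partition("other-payload", key, null, "orders", partitions);
        // The same key always maps to the same partition.
        System.out.println(first == second);
    }
}
```

In a real job, an implementation of Flink's actual KafkaPartitioner interface with this logic would be handed to KafkaRecordSerializationSchema.builder() via the setPartitioner call listed above.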
Uses of KafkaPartitioner in org.apache.flink.streaming.connectors.kafka.partitioner
Classes in org.apache.flink.streaming.connectors.kafka.partitioner that implement KafkaPartitioner:

    class FlinkFixedPartitioner<T>
        Deprecated. Will be turned into an internal class when FlinkKafkaProducer is removed.
    class FlinkKafkaPartitioner<T>
        Deprecated. Use KafkaPartitioner instead for KafkaSink.
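FlinkFixedPartitioner pins each parallel sink subtask to a single Kafka partition. The sketch below reproduces that selection rule (subtask index modulo partition count) as a plain static method, purely to illustrate the behavior; the class and method names here are local to the example, not Flink's.

```java
public class FixedPartitionerSketch {

    /**
     * Selection rule in the style of FlinkFixedPartitioner: a given sink
     * subtask always writes to the same partition, wrapping around when
     * there are more subtasks than partitions.
     */
    static int fixedPartition(int parallelInstanceId, int[] partitions) {
        if (partitions.length == 0) {
            throw new IllegalArgumentException("no partitions for topic");
        }
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        // With 4 sink subtasks and 3 partitions, subtask 3 wraps to partition 0.
        for (int subtask = 0; subtask < 4; subtask++) {
            System.out.println("subtask " + subtask + " -> partition "
                    + fixedPartition(subtask, partitions));
        }
    }
}
```

New code targeting KafkaSink should implement KafkaPartitioner directly rather than the deprecated FlinkKafkaPartitioner, per the deprecation notes above.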
Uses of KafkaPartitioner in org.apache.flink.streaming.connectors.kafka.table
Fields in org.apache.flink.streaming.connectors.kafka.table declared as KafkaPartitioner:

    protected KafkaPartitioner<org.apache.flink.table.data.RowData>
    KafkaDynamicSink.partitioner
        Partitioner to select the Kafka partition for each item.

Methods in org.apache.flink.streaming.connectors.kafka.table with parameters of type KafkaPartitioner:

    protected KafkaDynamicSink
    KafkaDynamicTableFactory.createKafkaTableSink(
        org.apache.flink.table.types.DataType physicalDataType,
        org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat,
        org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat,
        int[] keyProjection,
        int[] valueProjection,
        String keyPrefix,
        List<String> topics,
        Pattern topicPattern,
        Properties properties,
        KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner,
        org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee,
        Integer parallelism,
        String transactionalIdPrefix)

Constructors in org.apache.flink.streaming.connectors.kafka.table with parameters of type KafkaPartitioner:

    KafkaDynamicSink(
        org.apache.flink.table.types.DataType consumedDataType,
        org.apache.flink.table.types.DataType physicalDataType,
        org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> keyEncodingFormat,
        org.apache.flink.table.connector.format.EncodingFormat<org.apache.flink.api.common.serialization.SerializationSchema<org.apache.flink.table.data.RowData>> valueEncodingFormat,
        int[] keyProjection,
        int[] valueProjection,
        String keyPrefix,
        List<String> topics,
        Pattern topicPattern,
        Properties properties,
        KafkaPartitioner<org.apache.flink.table.data.RowData> partitioner,
        org.apache.flink.connector.base.DeliveryGuarantee deliveryGuarantee,
        boolean upsertMode,
        SinkBufferFlushMode flushMode,
        Integer parallelism,
        String transactionalIdPrefix)