Class FlinkKafkaProducer<IN>

java.lang.Object
org.apache.flink.api.common.functions.AbstractRichFunction
org.apache.flink.streaming.api.functions.sink.RichSinkFunction<IN>
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer<IN>
All Implemented Interfaces:
Serializable, org.apache.flink.api.common.functions.Function, org.apache.flink.api.common.functions.RichFunction, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.streaming.api.checkpoint.CheckpointedFunction, org.apache.flink.streaming.api.functions.sink.SinkFunction<IN>
Direct Known Subclasses:
FlinkKafkaShuffleProducer

@Deprecated @PublicEvolving public class FlinkKafkaProducer<IN> extends org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Deprecated.
Please use KafkaSink.
Flink sink to produce data into a Kafka topic. By default, the producer uses FlinkKafkaProducer.Semantic.AT_LEAST_ONCE semantics. Before using FlinkKafkaProducer.Semantic.EXACTLY_ONCE, please refer to Flink's Kafka connector documentation.
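As a minimal sketch of wiring this (deprecated) sink into a job, using the default AT_LEAST_ONCE semantic. The broker address and topic name are placeholders, not values from this documentation:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class KafkaSinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // "localhost:9092" and "my-topic" are placeholders for illustration.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");

        DataStream<String> stream = env.fromElements("a", "b", "c");

        // Uses the (topicId, serializationSchema, producerConfig) constructor;
        // AT_LEAST_ONCE is the default semantic.
        stream.addSink(new FlinkKafkaProducer<>(
                "my-topic", new SimpleStringSchema(), props));

        env.execute("kafka-sink-sketch");
    }
}
```

New code should prefer KafkaSink, per the deprecation notice above.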
  • Field Details

    • SAFE_SCALE_DOWN_FACTOR

      public static final int SAFE_SCALE_DOWN_FACTOR
      Deprecated.
      This coefficient determines the safe scale-down factor.

      If the Flink application previously failed before the first checkpoint completed, or if a new FlinkKafkaProducer job is started from scratch without a clean shutdown of the previous one, FlinkKafkaProducer does not know which Kafka transactionalIds were previously in use. In that case, it plays it safe and aborts all possible transactionalIds in the range [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR).

      The range of transactionalIds available for use is [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize).

      This means that if getNumberOfParallelSubtasks() is decreased by a factor larger than SAFE_SCALE_DOWN_FACTOR, some lingering transactions may be left behind.
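The relationship between the two ranges can be checked with plain arithmetic. The sketch below uses hypothetical numbers (and assumes a factor of 5 for SAFE_SCALE_DOWN_FACTOR, purely for illustration) to show which transactionalIds fall outside the defensively aborted range after a scale-down:

```java
public class ScaleDownCheck {
    // Value assumed for illustration; see the actual constant above.
    static final int SAFE_SCALE_DOWN_FACTOR = 5;

    /** Number of ids possibly left with open transactions after scaling down. */
    static int lingering(int oldParallelism, int newParallelism, int poolSize) {
        // Ids the old job may have used: [0, oldParallelism * poolSize)
        int oldUpper = oldParallelism * poolSize;
        // Ids the restarted job aborts defensively:
        int abortedUpper = newParallelism * poolSize * SAFE_SCALE_DOWN_FACTOR;
        return Math.max(0, oldUpper - abortedUpper);
    }

    public static void main(String[] args) {
        // Scaling 100 -> 10 subtasks (factor 10 > 5) leaves ids uncovered:
        System.out.println(lingering(100, 10, 5)); // 250
        // Scaling 100 -> 20 subtasks (factor exactly 5) is still covered:
        System.out.println(lingering(100, 20, 5)); // 0
    }
}
```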

    • DEFAULT_KAFKA_PRODUCERS_POOL_SIZE

      public static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
      Deprecated.
      Default number of KafkaProducers in the pool. See FlinkKafkaProducer.Semantic.EXACTLY_ONCE.
    • DEFAULT_KAFKA_TRANSACTION_TIMEOUT

      public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT
      Deprecated.
      Default value for the Kafka transaction timeout.
    • KEY_DISABLE_METRICS

      public static final String KEY_DISABLE_METRICS
      Deprecated.
      Configuration key for disabling metrics reporting.
    • producerConfig

      protected final Properties producerConfig
      Deprecated.
      User defined properties for the Producer.
    • defaultTopicId

      protected final String defaultTopicId
      Deprecated.
      The name of the default topic this producer is writing data to.
    • topicPartitionsMap

      protected final Map<String,int[]> topicPartitionsMap
      Deprecated.
      Partitions of each topic.
    • writeTimestampToKafka

      protected boolean writeTimestampToKafka
      Deprecated.
      Flag controlling whether we are writing the Flink record's timestamp into Kafka.
    • semantic

      protected FlinkKafkaProducer.Semantic semantic
      Deprecated.
      Semantic chosen for this instance.
    • callback

      @Nullable protected transient org.apache.kafka.clients.producer.Callback callback
      Deprecated.
      The callback that handles error propagation or logging.
    • asyncException

      @Nullable protected transient volatile Exception asyncException
      Deprecated.
      Errors encountered in the async producer are stored here.
    • pendingRecords

      protected final AtomicLong pendingRecords
      Deprecated.
      Number of unacknowledged records.
  • Constructor Details

    • FlinkKafkaProducer

      public FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink writes the elements of its input DataStream to the topic.
      Parameters:
      brokerList - Comma separated addresses of the brokers
      topicId - ID of the Kafka topic.
      serializationSchema - User defined (keyless) serialization schema.
    • FlinkKafkaProducer

      public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink writes the elements of its input DataStream to the topic.

      Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

      To use a custom partitioner, please use FlinkKafkaProducer(String, SerializationSchema, Properties, Optional) instead.

      Parameters:
      topicId - ID of the Kafka topic.
      serializationSchema - User defined key-less serialization schema.
      producerConfig - Properties with the producer configuration.
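For the constructors that take a Properties object, only bootstrap.servers is strictly required; other settings fall back to the Kafka producer defaults. A minimal configuration might look like the following sketch (the broker address and the optional tuning values are placeholders):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    /** Builds a minimal producer configuration; only bootstrap.servers is mandatory. */
    public static Properties minimalConfig(String bootstrapServers) {
        Properties props = new Properties();
        // The only required setting for the producer:
        props.setProperty("bootstrap.servers", bootstrapServers);
        // Optional tuning, shown purely for illustration:
        props.setProperty("batch.size", "16384");
        props.setProperty("linger.ms", "5");
        return props;
    }

    public static void main(String[] args) {
        Properties props = minimalConfig("localhost:9092"); // placeholder address
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```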
    • FlinkKafkaProducer

      public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

      Since a key-less SerializationSchema is used, records sent to Kafka will carry no key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

      Parameters:
      topicId - The topic to write data to
      serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
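A sketch of passing an explicit partitioner through the Optional parameter. The topic name and the hash-based partitioning logic below are illustrative, not prescribed by this API:

```java
import java.util.Optional;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

public class PartitionerSketch {
    // A toy partitioner that routes each record by its hash; illustrative only.
    static class HashPartitioner extends FlinkKafkaPartitioner<String> {
        @Override
        public int partition(String record, byte[] key, byte[] value,
                             String targetTopic, int[] partitions) {
            // floorMod avoids negative indices for negative hash codes.
            return partitions[Math.floorMod(record.hashCode(), partitions.length)];
        }
    }

    public static FlinkKafkaProducer<String> create(Properties props) {
        return new FlinkKafkaProducer<>(
                "my-topic",                        // placeholder topic
                new SimpleStringSchema(),
                props,
                Optional.of(new HashPartitioner()));
        // Optional.empty() would select round-robin distribution instead.
    }
}
```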
    • FlinkKafkaProducer

      public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

      Since a key-less SerializationSchema is used, records sent to Kafka will carry no key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

      Parameters:
      topicId - The topic to write data to
      serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
      semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
      kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
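A sketch of the exactly-once variant of this constructor. The broker address, topic, pool size, and timeout value are placeholders; note that with EXACTLY_ONCE the producer's transaction.timeout.ms generally must not exceed the broker's transaction.max.timeout.ms:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

public class ExactlyOnceSketch {
    public static FlinkKafkaProducer<String> create() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        // 15 minutes, shown for illustration; must fit the broker's
        // transaction.max.timeout.ms when using EXACTLY_ONCE.
        props.setProperty("transaction.timeout.ms", "900000");

        return new FlinkKafkaProducer<>(
                "my-topic",                           // placeholder topic
                new SimpleStringSchema(),
                props,
                (FlinkKafkaPartitioner<String>) null, // default partitioning
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE,
                5);                                   // producer pool size
    }
}
```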
    • FlinkKafkaProducer

      @Deprecated public FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
      Creates a FlinkKafkaProducer for a given topic. The sink writes the elements of its input DataStream to the topic.

      Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

      To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.

      Parameters:
      brokerList - Comma separated addresses of the brokers
      topicId - ID of the Kafka topic.
      serializationSchema - User defined serialization schema supporting key/value messages
    • FlinkKafkaProducer

      @Deprecated public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
      Creates a FlinkKafkaProducer for a given topic. The sink writes the elements of its input DataStream to the topic.

      Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

      To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.

      Parameters:
      topicId - ID of the Kafka topic.
      serializationSchema - User defined serialization schema supporting key/value messages
      producerConfig - Properties with the producer configuration.
    • FlinkKafkaProducer

      @Deprecated public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
      Creates a FlinkKafkaProducer for a given topic. The sink writes the elements of its input DataStream to the topic.

      Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

      Parameters:
      topicId - ID of the Kafka topic.
      serializationSchema - User defined serialization schema supporting key/value messages
      producerConfig - Properties with the producer configuration.
      semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
    • FlinkKafkaProducer

      @Deprecated public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

      If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

      Parameters:
      defaultTopicId - The default topic to write data to
      serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
    • FlinkKafkaProducer

      @Deprecated public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

      If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.

      Parameters:
      defaultTopicId - The default topic to write data to
      serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
      semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
      kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
    • FlinkKafkaProducer

      public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
      Parameters:
      defaultTopic - The default topic to write data to
      serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
    • FlinkKafkaProducer

      public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
      Deprecated.
      Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
      Parameters:
      defaultTopic - The default topic to write data to
      serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
      producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
      semantic - Defines semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
      kafkaProducersPoolSize - Overwrite default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
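A sketch of the KafkaSerializationSchema-based constructors above, which let the schema itself choose key, value, topic, and partition for each ProducerRecord. The topic name and the key/value encoding are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KafkaSchemaSketch {
    // Serializes each string as a keyed ProducerRecord; illustrative only.
    static class StringRecordSchema implements KafkaSerializationSchema<String> {
        @Override
        public ProducerRecord<byte[], byte[]> serialize(String element, Long timestamp) {
            byte[] bytes = element.getBytes(StandardCharsets.UTF_8);
            // Using the element as both key and value; "my-topic" is a placeholder.
            return new ProducerRecord<>("my-topic", bytes, bytes);
        }
    }

    public static FlinkKafkaProducer<String> create(Properties props) {
        return new FlinkKafkaProducer<>(
                "my-topic",                  // default topic, placeholder
                new StringRecordSchema(),
                props,
                FlinkKafkaProducer.Semantic.AT_LEAST_ONCE);
    }
}
```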
  • Method Details