Class FlinkKafkaProducer<IN>

    • Field Detail

      • SAFE_SCALE_DOWN_FACTOR

        public static final int SAFE_SCALE_DOWN_FACTOR
        Deprecated.
        This coefficient determines the safe scale-down factor.

        If the Flink application previously failed before its first checkpoint completed, or if we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, FlinkKafkaProducer doesn't know which Kafka transactionalIds were previously in use. In that case, it plays it safe and aborts all possible transactionalIds in the range: [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR)

        The range of transactional ids available for use is: [0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize)

        This means that if we decrease getNumberOfParallelSubtasks() by a factor larger than SAFE_SCALE_DOWN_FACTOR, some lingering transactions may be left behind.
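        The range arithmetic above can be sketched in plain Java. The parallelism and pool size below are placeholder values, and the factor of 5 is an assumed value for SAFE_SCALE_DOWN_FACTOR (the actual value is listed under "Constant Field Values"):

```java
public class ScaleDownRangeSketch {
    // Assumed value for this sketch; see "Constant Field Values" for the real one.
    static final int SAFE_SCALE_DOWN_FACTOR = 5;

    // Upper bound (exclusive) of transactional ids the producer may actively use.
    static long usableUpperBound(int parallelism, int kafkaProducersPoolSize) {
        return (long) parallelism * kafkaProducersPoolSize;
    }

    // Upper bound (exclusive) of the range aborted on a fresh start
    // when there was no clean shutdown of the previous run.
    static long abortUpperBound(int parallelism, int kafkaProducersPoolSize) {
        return usableUpperBound(parallelism, kafkaProducersPoolSize) * SAFE_SCALE_DOWN_FACTOR;
    }

    public static void main(String[] args) {
        int parallelism = 4;            // getNumberOfParallelSubtasks(), placeholder
        int kafkaProducersPoolSize = 5; // placeholder

        System.out.println("usable ids:  [0, " + usableUpperBound(parallelism, kafkaProducersPoolSize) + ")");
        System.out.println("aborted ids: [0, " + abortUpperBound(parallelism, kafkaProducersPoolSize) + ")");
    }
}
```

        Scaling parallelism down from 20 to anything at or above 4 keeps old transactional ids inside the abort range; dropping further than the factor allows is what leaves transactions lingering.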

        See Also:
        Constant Field Values
      • DEFAULT_KAFKA_TRANSACTION_TIMEOUT

        public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT
        Deprecated.
        Default value for kafka transaction timeout.
      • KEY_DISABLE_METRICS

        public static final String KEY_DISABLE_METRICS
        Deprecated.
        Configuration key for disabling the metrics reporting.
        See Also:
        Constant Field Values
      • producerConfig

        protected final Properties producerConfig
        Deprecated.
        User defined properties for the Producer.
      • defaultTopicId

        protected final String defaultTopicId
        Deprecated.
        The name of the default topic this producer is writing data to.
      • topicPartitionsMap

        protected final Map<String,​int[]> topicPartitionsMap
        Deprecated.
        Partitions of each topic.
      • writeTimestampToKafka

        protected boolean writeTimestampToKafka
        Deprecated.
        Flag controlling whether we are writing the Flink record's timestamp into Kafka.
      • callback

        @Nullable
        protected transient org.apache.kafka.clients.producer.Callback callback
        Deprecated.
        The callback that handles error propagation or logging.
      • asyncException

        @Nullable
        protected transient volatile Exception asyncException
        Deprecated.
        Errors encountered in the async producer are stored here.
      • pendingRecords

        protected final AtomicLong pendingRecords
        Deprecated.
        Number of unacknowledged records.
    • Constructor Detail

      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String brokerList,
                                  String topicId,
                                  org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic.
        Parameters:
        brokerList - Comma separated addresses of the brokers
        topicId - ID of the Kafka topic.
        serializationSchema - User defined (keyless) serialization schema.
      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String topicId,
                                  org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema,
                                  Properties producerConfig)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic.

        Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

        To use a custom partitioner, please use FlinkKafkaProducer(String, SerializationSchema, Properties, Optional) instead.

        Parameters:
        topicId - ID of the Kafka topic.
        serializationSchema - User defined key-less serialization schema.
        producerConfig - Properties with the producer configuration.
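        A sketch of how this constructor was typically wired into a pipeline. It assumes the deprecated flink-connector-kafka dependency on the classpath; the broker address and topic name are placeholders:

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

public class ProducerSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address

        DataStream<String> stream = env.fromElements("a", "b", "c");

        // Default FlinkFixedPartitioner: each sink subtask writes to one Kafka partition.
        stream.addSink(new FlinkKafkaProducer<>(
                "my-topic",               // placeholder topic
                new SimpleStringSchema(), // keyless serialization schema
                producerConfig));

        env.execute("kafka-producer-sketch");
    }
}
```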
      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String topicId,
                                  org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema,
                                  Properties producerConfig,
                                  Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

        Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

        Parameters:
        topicId - The topic to write data to
        serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
        producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
        customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
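        A hedged sketch of making the partitioner choice explicit with this constructor. Passing Optional.empty() opts out of the default FlinkFixedPartitioner, so keyless records are spread round-robin; the broker address and topic name are placeholders:

```java
import java.util.Optional;
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner;

public class RoundRobinSketch {
    public static FlinkKafkaProducer<String> build() {
        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address

        // Optional.empty(): no custom partitioner, so keyless records are
        // distributed over Kafka partitions in a round-robin fashion.
        Optional<FlinkKafkaPartitioner<String>> customPartitioner = Optional.empty();

        return new FlinkKafkaProducer<>(
                "my-topic",               // placeholder topic
                new SimpleStringSchema(),
                producerConfig,
                customPartitioner);
    }
}
```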
      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String topicId,
                                  org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema,
                                  Properties producerConfig,
                                  @Nullable
                                  FlinkKafkaPartitioner<IN> customPartitioner,
                                  FlinkKafkaProducer.Semantic semantic,
                                  int kafkaProducersPoolSize)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

        Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.

        Parameters:
        topicId - The topic to write data to
        serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
        producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
        customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
        semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
        kafkaProducersPoolSize - Overrides the default KafkaProducer pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String defaultTopic,
                                  KafkaSerializationSchema<IN> serializationSchema,
                                  Properties producerConfig,
                                  FlinkKafkaProducer.Semantic semantic)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
        Parameters:
        defaultTopic - The default topic to write data to
        serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
        producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
        semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
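        A hedged sketch of this constructor with EXACTLY_ONCE semantics, assuming the deprecated flink-connector-kafka dependency on the classpath. The broker address, topic name, and transaction timeout are placeholder values, and the lambda stands in for a full KafkaSerializationSchema implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceSketch {
    public static FlinkKafkaProducer<String> build() {
        Properties producerConfig = new Properties();
        producerConfig.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        // For EXACTLY_ONCE, the producer transaction timeout must not exceed
        // the broker's transaction.max.timeout.ms; 15 minutes is an assumed value.
        producerConfig.setProperty("transaction.timeout.ms", "900000");

        // The schema builds the full ProducerRecord, so it controls topic,
        // key, and (optionally) the target partition.
        KafkaSerializationSchema<String> schema =
                (element, timestamp) -> new ProducerRecord<>(
                        "my-topic",                                 // placeholder default topic
                        element.getBytes(StandardCharsets.UTF_8));

        return new FlinkKafkaProducer<>(
                "my-topic", schema, producerConfig,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
    }
}
```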
      • FlinkKafkaProducer

        public FlinkKafkaProducer​(String defaultTopic,
                                  KafkaSerializationSchema<IN> serializationSchema,
                                  Properties producerConfig,
                                  FlinkKafkaProducer.Semantic semantic,
                                  int kafkaProducersPoolSize)
        Deprecated.
        Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
        Parameters:
        defaultTopic - The default topic to write data to
        serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
        producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required property.
        semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
        kafkaProducersPoolSize - Overrides the default KafkaProducer pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).