Class FlinkKafkaProducer<IN>

java.lang.Object
  org.apache.flink.api.common.functions.AbstractRichFunction
    org.apache.flink.streaming.api.functions.sink.RichSinkFunction<IN>
      org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
        org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer<IN>

- All Implemented Interfaces:
  Serializable, org.apache.flink.api.common.functions.Function, org.apache.flink.api.common.functions.RichFunction, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.streaming.api.checkpoint.CheckpointedFunction, org.apache.flink.streaming.api.functions.sink.SinkFunction<IN>
- Direct Known Subclasses:
  FlinkKafkaShuffleProducer

@Deprecated
@PublicEvolving
public class FlinkKafkaProducer<IN>
extends org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
Deprecated. Please use KafkaSink.

Flink Sink to produce data into a Kafka topic. By default, the producer will use the FlinkKafkaProducer.Semantic.AT_LEAST_ONCE semantic. Before using FlinkKafkaProducer.Semantic.EXACTLY_ONCE, please refer to Flink's Kafka connector documentation.
- See Also:
- Serialized Form
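The class-level notes above leave the required producer configuration implicit. The sketch below builds a minimal `Properties` object of the kind every constructor on this page accepts. The broker address and topic name are placeholders, and the commented-out constructor call is only an illustration of how the properties would be handed to the deprecated sink (it needs Flink dependencies to compile).

```java
import java.util.Properties;

public class ProducerConfigExample {

    // Builds a minimal producer configuration. 'bootstrap.servers' is the only
    // required key; 'transaction.timeout.ms' matters once EXACTLY_ONCE is used,
    // because it must not exceed the broker's transaction.max.timeout.ms.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder address
        props.setProperty("transaction.timeout.ms", "900000");    // 15 minutes
        return props;
    }

    // Convenience accessor for the configured transaction timeout.
    public static String configuredTimeout() {
        return buildConfig().getProperty("transaction.timeout.ms");
    }

    public static void main(String[] args) {
        Properties props = buildConfig();
        // The properties would then be passed to one of the constructors listed
        // on this page, e.g. (illustration only, not compiled here):
        // new FlinkKafkaProducer<>("my-topic", serializationSchema, props,
        //         FlinkKafkaProducer.Semantic.EXACTLY_ONCE);
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```

The timeout value is a deliberate example: Kafka brokers cap transactions at `transaction.max.timeout.ms` (15 minutes by default), so a sink-side timeout above that cap is rejected by the broker.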
-
-
Nested Class Summary
- static class FlinkKafkaProducer.ContextStateSerializer: Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionContext.
- static class FlinkKafkaProducer.KafkaTransactionContext: Deprecated. Context associated to this instance of the FlinkKafkaProducer.
- static class FlinkKafkaProducer.KafkaTransactionState: Deprecated. State for handling transactions.
- static class FlinkKafkaProducer.NextTransactionalIdHint: Deprecated. Keeps information required to deduce the next safe-to-use transactional id.
- static class FlinkKafkaProducer.NextTransactionalIdHintSerializer: Deprecated. TypeSerializer for FlinkKafkaProducer.NextTransactionalIdHint.
- static class FlinkKafkaProducer.Semantic: Deprecated. Semantics that can be chosen.
- static class FlinkKafkaProducer.TransactionStateSerializer: Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionState.
Nested classes/interfaces inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.State<TXN extends Object,CONTEXT extends Object>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializer<TXN extends Object,CONTEXT extends Object>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializerSnapshot<TXN extends Object,CONTEXT extends Object>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.TransactionHolder<TXN extends Object>
-
-
Field Summary
- protected Exception asyncException: Deprecated. Errors encountered in the async producer are stored here.
- protected org.apache.kafka.clients.producer.Callback callback: Deprecated. The callback that handles error propagation or logging callbacks.
- static int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE: Deprecated. Default number of KafkaProducers in the pool.
- static org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT: Deprecated. Default value for the Kafka transaction timeout.
- protected String defaultTopicId: Deprecated. The name of the default topic this producer is writing data to.
- static String KEY_DISABLE_METRICS: Deprecated. Configuration key for disabling the metrics reporting.
- protected AtomicLong pendingRecords: Deprecated. Number of unacknowledged records.
- protected Properties producerConfig: Deprecated. User-defined properties for the Producer.
- static int SAFE_SCALE_DOWN_FACTOR: Deprecated. This coefficient determines the safe scale-down factor.
- protected FlinkKafkaProducer.Semantic semantic: Deprecated. Semantic chosen for this instance.
- protected Map<String,int[]> topicPartitionsMap: Deprecated. Partitions of each topic.
- protected boolean writeTimestampToKafka: Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka.
-
Constructor Summary
- FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
- FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
- FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
- FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
-
Method Summary
- protected void abort(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void acknowledgeMessage(): Deprecated. ATTENTION to subclass implementors: when overriding this method, always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer.
- protected FlinkKafkaProducer.KafkaTransactionState beginTransaction(): Deprecated.
- protected void checkErroneous(): Deprecated.
- void close(): Deprecated.
- protected void commit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected FlinkKafkaInternalProducer<byte[],byte[]> createProducer(): Deprecated.
- protected void finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions): Deprecated.
- protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[],byte[]> producer): Deprecated.
- static long getTransactionTimeout(Properties producerConfig): Deprecated.
- FlinkKafkaProducer<IN> ignoreFailuresAfterTransactionTimeout(): Deprecated. Disables the propagation of exceptions thrown when committing presumably timed-out Kafka transactions during recovery of the job.
- void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context): Deprecated.
- protected Optional<FlinkKafkaProducer.KafkaTransactionContext> initializeUserContext(): Deprecated.
- void invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context): Deprecated.
- void open(org.apache.flink.configuration.Configuration configuration): Deprecated. Initializes the connection to Kafka.
- protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- void setLogFailuresOnly(boolean logFailuresOnly): Deprecated. Defines whether the producer should fail on errors, or only log them.
- void setTransactionalIdPrefix(String transactionalIdPrefix): Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka.
- void setWriteTimestampToKafka(boolean writeTimestampToKafka): Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
- void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context): Deprecated.
Methods inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction
currentTransaction, enableTransactionTimeoutWarnings, finish, finishProcessing, getUserContext, invoke, invoke, notifyCheckpointAborted, notifyCheckpointComplete, pendingTransactions, setTransactionTimeout
-
Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction
getIterationRuntimeContext, getRuntimeContext, setRuntimeContext
-
-
-
-
Field Detail
-
SAFE_SCALE_DOWN_FACTOR
public static final int SAFE_SCALE_DOWN_FACTOR
Deprecated. This coefficient determines the safe scale-down factor.

If the Flink application previously failed before the first checkpoint completed, or we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, FlinkKafkaProducer doesn't know what the set of previously used Kafka transactional ids was. In that case, it plays safe and aborts all of the possible transactional ids from the range:

[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR)

The range of transactional ids available for use is:

[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize)

This means that if we decrease getNumberOfParallelSubtasks() by a factor larger than SAFE_SCALE_DOWN_FACTOR, some lingering transactions can be left behind.
- See Also:
- Constant Field Values
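The two id ranges in the description above can be made concrete with a little arithmetic. The sketch below assumes a SAFE_SCALE_DOWN_FACTOR of 5 purely for illustration; consult the Constant Field Values page for the real value.

```java
public class TransactionalIdRanges {

    // Illustrative value only; the actual constant is defined on FlinkKafkaProducer.
    static final int SAFE_SCALE_DOWN_FACTOR = 5;

    // Upper bound (exclusive) of the transactional ids actually in use.
    static long usableRange(int parallelSubtasks, int poolSize) {
        return (long) parallelSubtasks * poolSize;
    }

    // Upper bound (exclusive) of the ids aborted after an unclean restart.
    static long abortRange(int parallelSubtasks, int poolSize) {
        return usableRange(parallelSubtasks, poolSize) * SAFE_SCALE_DOWN_FACTOR;
    }

    public static void main(String[] args) {
        // 10 subtasks with a pool of 5 producers each: ids [0, 50) are in use,
        // while ids [0, 250) get aborted after an unclean shutdown.
        System.out.println(usableRange(10, 5)); // 50
        System.out.println(abortRange(10, 5));  // 250
    }
}
```

Scaling parallelism down from 10 to below 2 (a factor larger than 5) would leave previously used ids above the new abort range, which is exactly the lingering-transaction hazard the description warns about.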
-
DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
public static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
Deprecated. Default number of KafkaProducers in the pool. See FlinkKafkaProducer.Semantic.EXACTLY_ONCE.
- See Also:
- Constant Field Values
-
DEFAULT_KAFKA_TRANSACTION_TIMEOUT
public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT
Deprecated. Default value for the Kafka transaction timeout.
-
KEY_DISABLE_METRICS
public static final String KEY_DISABLE_METRICS
Deprecated. Configuration key for disabling the metrics reporting.
- See Also:
- Constant Field Values
-
producerConfig
protected final Properties producerConfig
Deprecated. User-defined properties for the Producer.
-
defaultTopicId
protected final String defaultTopicId
Deprecated. The name of the default topic this producer is writing data to.
-
topicPartitionsMap
protected final Map<String,int[]> topicPartitionsMap
Deprecated. Partitions of each topic.
-
writeTimestampToKafka
protected boolean writeTimestampToKafka
Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka.
-
semantic
protected FlinkKafkaProducer.Semantic semantic
Deprecated. Semantic chosen for this instance.
-
callback
@Nullable protected transient org.apache.kafka.clients.producer.Callback callback
Deprecated. The callback that handles error propagation or logging callbacks.
-
asyncException
@Nullable protected transient volatile Exception asyncException
Deprecated. Errors encountered in the async producer are stored here.
-
pendingRecords
protected final AtomicLong pendingRecords
Deprecated. Number of unacknowledged records.
-
-
Constructor Detail
-
FlinkKafkaProducer
public FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
- Parameters:
  - brokerList - Comma-separated addresses of the brokers
  - topicId - ID of the Kafka topic.
  - serializationSchema - User-defined (keyless) serialization schema.
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, SerializationSchema, Properties, Optional) instead.
- Parameters:
  - topicId - ID of the Kafka topic.
  - serializationSchema - User-defined key-less serialization schema.
  - producerConfig - Properties with the producer configuration.
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
  - topicId - The topic to write data to
  - serializationSchema - A key-less serializable serialization schema for turning user objects into a Kafka-consumable byte[]
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.

Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
  - topicId - The topic to write data to
  - serializationSchema - A key-less serializable serialization schema for turning user objects into a Kafka-consumable byte[]
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
  - semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
  - kafkaProducersPoolSize - Overrides the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
FlinkKafkaProducer
@Deprecated public FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.
- Parameters:
  - brokerList - Comma-separated addresses of the brokers
  - topicId - ID of the Kafka topic.
  - serializationSchema - User-defined serialization schema supporting key/value messages
-
FlinkKafkaProducer
@Deprecated public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).

To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.
- Parameters:
  - topicId - ID of the Kafka topic.
  - serializationSchema - User-defined serialization schema supporting key/value messages
  - producerConfig - Properties with the producer configuration.
-
FlinkKafkaProducer
@Deprecated public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.

Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
- Parameters:
  - topicId - ID of the Kafka topic.
  - serializationSchema - User-defined serialization schema supporting key/value messages
  - producerConfig - Properties with the producer configuration.
  - semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
-
FlinkKafkaProducer
@Deprecated public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
  - defaultTopicId - The default topic to write data to
  - serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
-
FlinkKafkaProducer
@Deprecated public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.

If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
  - defaultTopicId - The default topic to write data to
  - serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
  - semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
  - kafkaProducersPoolSize - Overrides the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
FlinkKafkaProducer
public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
- Parameters:
  - defaultTopic - The default topic to write data to
  - serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
-
FlinkKafkaProducer
public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema and possibly a custom FlinkKafkaPartitioner.
- Parameters:
  - defaultTopic - The default topic to write data to
  - serializationSchema - A serializable serialization schema for turning user objects into a Kafka-consumable byte[] supporting key/value messages
  - producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
  - semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
  - kafkaProducersPoolSize - Overrides the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
-
Method Detail
-
setWriteTimestampToKafka
public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. Timestamps must be positive for Kafka to accept them.
- Parameters:
  - writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.
-
setLogFailuresOnly
public void setLogFailuresOnly(boolean logFailuresOnly)
Deprecated. Defines whether the producer should fail on errors, or only log them. If this is set to true, exceptions will only be logged; if set to false, exceptions will eventually be thrown and cause the streaming program to fail (and enter recovery).
- Parameters:
  - logFailuresOnly - The flag to indicate logging-only on exceptions.
-
setTransactionalIdPrefix
public void setTransactionalIdPrefix(String transactionalIdPrefix)
Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka. If not set, the transactional.id will be prefixed with taskName + "-" + operatorUid.

Note that if we change the prefix after the Flink application previously failed before the first checkpoint completed, or we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, there will be some lingering transactions left, since we don't know what the previously used transactional.id prefix was.
- Parameters:
  - transactionalIdPrefix - the transactional.id prefix
- Throws:
  - NullPointerException - Thrown if the transactionalIdPrefix was null.
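The default prefix and the documented NullPointerException contract can be sketched as follows. The taskName and operatorUid values below are made-up placeholders, and any suffixing scheme that turns a prefix into full transactional ids is internal to the connector, so it is not shown here.

```java
import java.util.Objects;

public class TransactionalIdPrefixSketch {

    private String transactionalIdPrefix;

    // Mirrors the documented contract: a null prefix is rejected with a
    // NullPointerException.
    public void setTransactionalIdPrefix(String prefix) {
        this.transactionalIdPrefix =
                Objects.requireNonNull(prefix, "transactionalIdPrefix must not be null");
    }

    // Default prefix as documented above: taskName + "-" + operatorUid.
    public static String defaultPrefix(String taskName, String operatorUid) {
        return taskName + "-" + operatorUid;
    }

    public static void main(String[] args) {
        // "kafka-sink" and "uid-1" are placeholder values.
        System.out.println(defaultPrefix("kafka-sink", "uid-1")); // kafka-sink-uid-1
    }
}
```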
-
ignoreFailuresAfterTransactionTimeout
public FlinkKafkaProducer<IN> ignoreFailuresAfterTransactionTimeout()
Deprecated. Disables the propagation of exceptions thrown when committing presumably timed-out Kafka transactions during recovery of the job. If a Kafka transaction has timed out, a commit will never be successful. Hence, use this feature to avoid recovery loops of the job. Exceptions will still be logged to inform the user that data loss might have occurred.

Note that we use System.currentTimeMillis() to track the age of a transaction. Moreover, only exceptions thrown during recovery are caught, i.e., the producer will attempt at least one commit of the transaction before giving up.
- Overrides:
  - ignoreFailuresAfterTransactionTimeout in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
open
public void open(org.apache.flink.configuration.Configuration configuration) throws Exception
Deprecated. Initializes the connection to Kafka.
- Specified by:
  - open in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
  - open in class org.apache.flink.api.common.functions.AbstractRichFunction
- Throws:
  - Exception
-
invoke
public void invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) throws FlinkKafkaException
Deprecated.
- Specified by:
  - invoke in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
close
public void close() throws FlinkKafkaException
Deprecated.
- Specified by:
  - close in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
  - close in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
beginTransaction
protected FlinkKafkaProducer.KafkaTransactionState beginTransaction() throws FlinkKafkaException
Deprecated.
- Specified by:
  - beginTransaction in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
preCommit
protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction) throws FlinkKafkaException
Deprecated.
- Specified by:
  - preCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
commit
protected void commit(FlinkKafkaProducer.KafkaTransactionState transaction)
Deprecated.
- Specified by:
  - commit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
recoverAndCommit
protected void recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState transaction)
Deprecated.
- Overrides:
  - recoverAndCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
abort
protected void abort(FlinkKafkaProducer.KafkaTransactionState transaction)
Deprecated.
- Specified by:
  - abort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
recoverAndAbort
protected void recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState transaction)
Deprecated.
- Overrides:
  - recoverAndAbort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
acknowledgeMessage
protected void acknowledgeMessage()
Deprecated. ATTENTION to subclass implementors: when overriding this method, always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer. If not, be sure you know what you are doing.
-
snapshotState
public void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) throws Exception
Deprecated.
- Specified by:
  - snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Overrides:
  - snapshotState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
Exception
-
initializeState
public void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) throws Exception
Deprecated.
- Specified by:
  - initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Overrides:
  - initializeState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
Exception
-
initializeUserContext
protected Optional<FlinkKafkaProducer.KafkaTransactionContext> initializeUserContext()
Deprecated.
- Overrides:
  - initializeUserContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
finishRecoveringContext
protected void finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions)
Deprecated.
- Overrides:
  - finishRecoveringContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN,FlinkKafkaProducer.KafkaTransactionState,FlinkKafkaProducer.KafkaTransactionContext>
-
createProducer
protected FlinkKafkaInternalProducer<byte[],byte[]> createProducer()
Deprecated.
-
checkErroneous
protected void checkErroneous() throws FlinkKafkaException
Deprecated.
- Throws:
FlinkKafkaException
-
getPartitionsByTopic
protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[],byte[]> producer)
Deprecated.
-
getTransactionTimeout
public static long getTransactionTimeout(Properties producerConfig)
Deprecated.
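getTransactionTimeout carries no description on this page; presumably it reads the producer's transaction.timeout.ms setting from the supplied Properties. The sketch below shows equivalent lookup logic under that assumption; the real method's handling of a missing or defaulted key may differ.

```java
import java.util.Properties;

public class TransactionTimeoutExample {

    // Assumed behavior: read "transaction.timeout.ms" from the producer
    // Properties, accepting either a String or a numeric value.
    static long transactionTimeout(Properties producerConfig) {
        Object value = producerConfig.get("transaction.timeout.ms");
        if (value instanceof String) {
            return Long.parseLong((String) value);
        }
        if (value instanceof Number) {
            return ((Number) value).longValue();
        }
        throw new IllegalArgumentException("transaction.timeout.ms is not set");
    }

    // Helper that builds the Properties from a raw string value.
    static long timeoutFromString(String ms) {
        Properties p = new Properties();
        p.setProperty("transaction.timeout.ms", ms);
        return transactionTimeout(p);
    }

    public static void main(String[] args) {
        System.out.println(timeoutFromString("60000")); // 60000
    }
}
```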
-
-