Class FlinkKafkaProducer<IN>
- All Implemented Interfaces:
Serializable, org.apache.flink.api.common.functions.Function, org.apache.flink.api.common.functions.RichFunction, org.apache.flink.api.common.state.CheckpointListener, org.apache.flink.streaming.api.checkpoint.CheckpointedFunction, org.apache.flink.streaming.api.functions.sink.SinkFunction<IN>
- Direct Known Subclasses:
FlinkKafkaShuffleProducer
By default, the producer uses the FlinkKafkaProducer.Semantic.AT_LEAST_ONCE semantic. Before using FlinkKafkaProducer.Semantic.EXACTLY_ONCE, please refer to Flink's Kafka connector documentation.
- See Also:
-
Nested Class Summary
Nested Classes:
- static class: Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionContext.
- static class: Deprecated. Context associated to this instance of the FlinkKafkaProducer.
- static class: Deprecated. State for handling transactions.
- static class: Deprecated. Keep information required to deduce the next safe-to-use transactional id.
- static class: Deprecated. TypeSerializer for FlinkKafkaProducer.NextTransactionalIdHint.
- static enum: Deprecated. Semantics that can be chosen.
- static class: Deprecated. TypeSerializer for FlinkKafkaProducer.KafkaTransactionState.

Nested classes/interfaces inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction:
org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.State<TXN, CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializer<TXN, CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.StateSerializerSnapshot<TXN, CONTEXT>, org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction.TransactionHolder<TXN>

Nested classes/interfaces inherited from interface org.apache.flink.streaming.api.functions.sink.SinkFunction:
org.apache.flink.streaming.api.functions.sink.SinkFunction.Context -
Field Summary
Fields:
- protected Exception asyncException: Deprecated. Errors encountered in the async producer are stored here.
- protected org.apache.kafka.clients.producer.Callback callback: Deprecated. The callback that handles error propagation or logging callbacks.
- static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE: Deprecated. Default number of KafkaProducers in the pool.
- static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT: Deprecated. Default value for the Kafka transaction timeout.
- protected final String defaultTopicId: Deprecated. The name of the default topic this producer is writing data to.
- static final String KEY_DISABLE_METRICS: Deprecated. Configuration key for disabling the metrics reporting.
- protected final AtomicLong pendingRecords: Deprecated. Number of unacknowledged records.
- protected final Properties producerConfig: Deprecated. User defined properties for the Producer.
- static final int SAFE_SCALE_DOWN_FACTOR: Deprecated. This coefficient determines the safe scale-down factor.
- protected FlinkKafkaProducer.Semantic semantic: Deprecated. Semantic chosen for this instance.
- topicPartitionsMap: Deprecated. Partitions of each topic.
- protected boolean writeTimestampToKafka: Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka.

Fields inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction:
pendingCommitTransactions, state, userContext -
Constructor Summary
Constructors:
- FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize): Deprecated. Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
- FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
- FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
- FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic) -
Method Summary
Methods:
- protected void abort(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void acknowledgeMessage(): Deprecated. ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer.
- protected FlinkKafkaProducer.KafkaTransactionState beginTransaction(): Deprecated.
- protected void checkErroneous(): Deprecated.
- void close(): Deprecated.
- protected void commit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected FlinkKafkaInternalProducer<byte[], byte[]> createProducer(): Deprecated.
- protected void finishProcessing(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions): Deprecated.
- protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[], byte[]> producer): Deprecated.
- static long getTransactionTimeout(Properties producerConfig): Deprecated.
- ignoreFailuresAfterTransactionTimeout(): Deprecated. Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job.
- void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context): Deprecated.
- initializeUserContext(): Deprecated.
- void invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context): Deprecated.
- void open(org.apache.flink.configuration.Configuration configuration): Deprecated. Initializes the connection to Kafka.
- protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- protected void recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState transaction): Deprecated.
- void setLogFailuresOnly(boolean logFailuresOnly): Deprecated. Defines whether the producer should fail on errors, or only log them.
- void setTransactionalIdPrefix(String transactionalIdPrefix): Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka.
- void setWriteTimestampToKafka(boolean writeTimestampToKafka): Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
- void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context): Deprecated.

Methods inherited from class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction:
currentTransaction, enableTransactionTimeoutWarnings, finish, getUserContext, invoke, invoke, notifyCheckpointAborted, notifyCheckpointComplete, pendingTransactions, setTransactionTimeout

Methods inherited from class org.apache.flink.api.common.functions.AbstractRichFunction:
getIterationRuntimeContext, getRuntimeContext, setRuntimeContext

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.flink.api.common.functions.RichFunction:
open

Methods inherited from interface org.apache.flink.streaming.api.functions.sink.SinkFunction:
writeWatermark
-
Field Details
-
SAFE_SCALE_DOWN_FACTOR
public static final int SAFE_SCALE_DOWN_FACTOR
Deprecated. This coefficient determines the safe scale-down factor.
If the Flink application previously failed before the first checkpoint completed, or we are starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, FlinkKafkaProducer doesn't know what the set of previously used Kafka transactionalIds was. In that case, it will play it safe and abort all of the possible transactionalIds in the range:
[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize * SAFE_SCALE_DOWN_FACTOR)
The range of transactional ids available for use is:
[0, getNumberOfParallelSubtasks() * kafkaProducersPoolSize)
This means that if we decrease getNumberOfParallelSubtasks() by a factor larger than SAFE_SCALE_DOWN_FACTOR, we may leave some lingering transactions behind.- See Also:
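The interaction between the two ranges above can be illustrated with plain arithmetic. This is a hand-rolled sketch, not Flink code; the parallelism and pool-size values are hypothetical, and the value 5 is an assumption about the actual SAFE_SCALE_DOWN_FACTOR constant.

```java
public class ScaleDownRangeDemo {
    // Assumed value of SAFE_SCALE_DOWN_FACTOR; hypothetical for this sketch.
    static final int SAFE_SCALE_DOWN_FACTOR = 5;

    // Upper bound (exclusive) of transactional ids the producer may use.
    static long availableRange(int parallelSubtasks, int poolSize) {
        return (long) parallelSubtasks * poolSize;
    }

    // Upper bound (exclusive) of transactional ids aborted on a fresh start.
    static long abortedRange(int parallelSubtasks, int poolSize) {
        return (long) parallelSubtasks * poolSize * SAFE_SCALE_DOWN_FACTOR;
    }

    public static void main(String[] args) {
        // With parallelism 4 and a pool of 5 producers per subtask:
        System.out.println(availableRange(4, 5)); // usable ids fall in [0, 20)
        System.out.println(abortedRange(4, 5));   // ids in [0, 100) are aborted
        // Scaling down from parallelism 20 to 4 is still covered, because the
        // old usable range [0, 100) fits inside the new abort range [0, 100).
        System.out.println(availableRange(20, 5) <= abortedRange(4, 5)); // true
    }
}
```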
-
DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
public static final int DEFAULT_KAFKA_PRODUCERS_POOL_SIZE
Deprecated. Default number of KafkaProducers in the pool. See FlinkKafkaProducer.Semantic.EXACTLY_ONCE.- See Also:
-
DEFAULT_KAFKA_TRANSACTION_TIMEOUT
public static final org.apache.flink.api.common.time.Time DEFAULT_KAFKA_TRANSACTION_TIMEOUT
Deprecated. Default value for the Kafka transaction timeout. -
KEY_DISABLE_METRICS
static final String KEY_DISABLE_METRICS
Deprecated. Configuration key for disabling the metrics reporting.- See Also:
-
producerConfig
protected final Properties producerConfig
Deprecated. User defined properties for the Producer. -
defaultTopicId
protected final String defaultTopicId
Deprecated. The name of the default topic this producer is writing data to. -
topicPartitionsMap
Deprecated. Partitions of each topic. -
writeTimestampToKafka
protected boolean writeTimestampToKafka
Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka. -
semantic
protected FlinkKafkaProducer.Semantic semantic
Deprecated. Semantic chosen for this instance. -
callback
protected org.apache.kafka.clients.producer.Callback callback
Deprecated. The callback that handles error propagation or logging callbacks. -
asyncException
protected Exception asyncException
Deprecated. Errors encountered in the async producer are stored here. -
pendingRecords
protected final AtomicLong pendingRecords
Deprecated. Number of unacknowledged records.
-
-
Constructor Details
-
FlinkKafkaProducer
public FlinkKafkaProducer(String brokerList, String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
- Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined (keyless) serialization schema.
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer(String, SerializationSchema, Properties, Optional) instead.
- Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined key-less serialization schema.
producerConfig - Properties with the producer configuration.
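The FlinkFixedPartitioner behavior described above, where each sink subtask is pinned to one Kafka partition, can be sketched as plain modulo arithmetic. The method below is an illustrative stand-in, not the actual partitioner API.

```java
public class FixedPartitionerSketch {
    // Each sink subtask always picks the same Kafka partition:
    // the subtask index modulo the number of partitions.
    static int partitionFor(int subtaskIndex, int numPartitions) {
        return subtaskIndex % numPartitions;
    }

    public static void main(String[] args) {
        // A 3-subtask sink writing to a 2-partition topic:
        System.out.println(partitionFor(0, 2)); // subtask 0 -> partition 0
        System.out.println(partitionFor(1, 2)); // subtask 1 -> partition 1
        System.out.println(partitionFor(2, 2)); // subtask 2 -> partition 0 again
    }
}
```

Note that with more subtasks than partitions, several subtasks share a partition, while with fewer subtasks than partitions some partitions receive no data.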
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.
Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
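The round-robin distribution used for key-less records without a custom partitioner can be sketched as a rotating counter. This illustrates the concept only; it is not Kafka's actual partitioner implementation.

```java
import java.util.concurrent.atomic.AtomicLong;

public class RoundRobinSketch {
    private final AtomicLong counter = new AtomicLong();

    // Without a key or custom partitioner, each successive record
    // rotates to the next partition.
    int nextPartition(int numPartitions) {
        return (int) (counter.getAndIncrement() % numPartitions);
    }

    public static void main(String[] args) {
        RoundRobinSketch rr = new RoundRobinSketch();
        // Five key-less records over a 3-partition topic:
        for (int i = 0; i < 5; i++) {
            System.out.print(rr.nextPartition(3) + " "); // prints "0 1 2 0 1 "
        }
    }
}
```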
-
FlinkKafkaProducer
public FlinkKafkaProducer(String topicId, org.apache.flink.api.common.serialization.SerializationSchema<IN> serializationSchema, Properties producerConfig, @Nullable FlinkKafkaPartitioner<IN> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a key-less SerializationSchema and possibly a custom FlinkKafkaPartitioner.
Since a key-less SerializationSchema is used, records sent to Kafka will not have an attached key. Therefore, if a partitioner is also not provided, records will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
topicId - The topic to write data to
serializationSchema - A key-less serializable serialization schema for turning user objects into a kafka-consumable byte[]
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be distributed to Kafka partitions in a round-robin fashion.
semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrites the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
FlinkKafkaProducer
@Deprecated
public FlinkKafkaProducer(String brokerList, String topicId, KeyedSerializationSchema<IN> serializationSchema)
Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.
- Parameters:
brokerList - Comma separated addresses of the brokers
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
-
FlinkKafkaProducer
@Deprecated
public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig)
Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
To use a custom partitioner, please use FlinkKafkaProducer(String, KeyedSerializationSchema, Properties, Optional) instead.
- Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
-
FlinkKafkaProducer
@Deprecated
public FlinkKafkaProducer(String topicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
Creates a FlinkKafkaProducer for a given topic. The sink produces a DataStream to the topic.
Using this constructor, the default FlinkFixedPartitioner will be used as the partitioner. This default partitioner maps each sink subtask to a single Kafka partition (i.e. all records received by a sink subtask will end up in the same Kafka partition).
- Parameters:
topicId - ID of the Kafka topic.
serializationSchema - User defined serialization schema supporting key/value messages
producerConfig - Properties with the producer configuration.
semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
-
FlinkKafkaProducer
@Deprecated
public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner)
Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.
If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
-
FlinkKafkaProducer
@Deprecated
public FlinkKafkaProducer(String defaultTopicId, KeyedSerializationSchema<IN> serializationSchema, Properties producerConfig, Optional<FlinkKafkaPartitioner<IN>> customPartitioner, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a keyed KeyedSerializationSchema and possibly a custom FlinkKafkaPartitioner.
If a partitioner is not provided, written records will be partitioned by the attached key of each record (as determined by KeyedSerializationSchema.serializeKey(Object)). If written records do not have a key (i.e., KeyedSerializationSchema.serializeKey(Object) returns null), they will be distributed to Kafka partitions in a round-robin fashion.
- Parameters:
defaultTopicId - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
customPartitioner - A serializable partitioner for assigning messages to Kafka partitions. If a partitioner is not provided, records will be partitioned by the key of each record (determined by KeyedSerializationSchema.serializeKey(Object)). If the keys are null, then records will be distributed to Kafka partitions in a round-robin fashion.
semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrites the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
FlinkKafkaProducer
public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema for serializing records to a ProducerRecord, including partitioning information.
- Parameters:
defaultTopic - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
-
FlinkKafkaProducer
public FlinkKafkaProducer(String defaultTopic, KafkaSerializationSchema<IN> serializationSchema, Properties producerConfig, FlinkKafkaProducer.Semantic semantic, int kafkaProducersPoolSize)
Deprecated. Creates a FlinkKafkaProducer for a given topic. The sink produces its input to the topic. It accepts a KafkaSerializationSchema and possibly a custom FlinkKafkaPartitioner.
- Parameters:
defaultTopic - The default topic to write data to
serializationSchema - A serializable serialization schema for turning user objects into a kafka-consumable byte[] supporting key/value messages
producerConfig - Configuration properties for the KafkaProducer. 'bootstrap.servers' is the only required argument.
semantic - Defines the semantic that will be used by this producer (see FlinkKafkaProducer.Semantic).
kafkaProducersPoolSize - Overwrites the default KafkaProducers pool size (see FlinkKafkaProducer.Semantic.EXACTLY_ONCE).
-
-
Method Details
-
setWriteTimestampToKafka
public void setWriteTimestampToKafka(boolean writeTimestampToKafka)
Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka. Timestamps must be positive for Kafka to accept them.
- Parameters:
writeTimestampToKafka - Flag indicating if Flink's internal timestamps are written to Kafka.
-
setLogFailuresOnly
public void setLogFailuresOnly(boolean logFailuresOnly)
Deprecated. Defines whether the producer should fail on errors, or only log them. If set to true, exceptions will only be logged; if set to false, exceptions will eventually be thrown and cause the streaming program to fail (and enter recovery).
- Parameters:
logFailuresOnly - The flag to indicate logging-only on exceptions.
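The two behaviors of this flag can be sketched in plain Java. handleAsyncError below is a hypothetical stand-in for the producer's internal error handling, not a Flink API.

```java
public class LogFailuresOnlyDemo {
    // Hypothetical sketch of the log-vs-fail decision controlled by the flag.
    static String handleAsyncError(Exception error, boolean logFailuresOnly) throws Exception {
        if (logFailuresOnly) {
            // Only record the failure; the stream keeps running (data may be lost).
            return "logged: " + error.getMessage();
        }
        // Re-throw so the job fails and enters recovery.
        throw error;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(handleAsyncError(new Exception("broker down"), true));
        try {
            handleAsyncError(new Exception("broker down"), false);
        } catch (Exception e) {
            System.out.println("job failed: " + e.getMessage());
        }
    }
}
```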
-
setTransactionalIdPrefix
public void setTransactionalIdPrefix(String transactionalIdPrefix)
Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka. If not set, the transactional.id will be prefixed with taskName + "-" + operatorUid.
Note that, if we change the prefix after the Flink application previously failed before the first checkpoint completed, or when starting a new batch of FlinkKafkaProducer from scratch without a clean shutdown of the previous one, then, since we don't know the previously used transactional.id prefix, some lingering transactions will be left behind.
- Parameters:
transactionalIdPrefix - the transactional.id prefix
- Throws:
NullPointerException - Thrown, if the transactionalIdPrefix was null.
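A rough sketch of the default prefix construction and the null check described above. defaultPrefix and requireNonNullPrefix are illustrative helpers, and the taskName/operatorUid values are hypothetical.

```java
import java.util.Objects;

public class TransactionalIdPrefixDemo {
    // Default prefix used when setTransactionalIdPrefix is never called:
    // taskName + "-" + operatorUid (values passed in below are hypothetical).
    static String defaultPrefix(String taskName, String operatorUid) {
        return taskName + "-" + operatorUid;
    }

    // The setter rejects null, mirroring the documented NullPointerException.
    static String requireNonNullPrefix(String transactionalIdPrefix) {
        return Objects.requireNonNull(
                transactionalIdPrefix, "transactionalIdPrefix must not be null");
    }

    public static void main(String[] args) {
        System.out.println(defaultPrefix("Sink: Kafka", "uid-1234")); // Sink: Kafka-uid-1234
        System.out.println(requireNonNullPrefix("my-app-txn"));       // my-app-txn
    }
}
```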
-
ignoreFailuresAfterTransactionTimeout
Deprecated. Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job. If a Kafka transaction has timed out, a commit will never be successful. Hence, use this feature to avoid recovery loops of the job. Exceptions will still be logged to inform the user that data loss might have occurred.
Note that we use System.currentTimeMillis() to track the age of a transaction. Moreover, only exceptions thrown during recovery are caught, i.e., the producer will attempt at least one commit of the transaction before giving up.
- Overrides:
ignoreFailuresAfterTransactionTimeout in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
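The wall-clock age check described above can be sketched as follows; the method and parameter names are illustrative, not Flink's internals.

```java
public class TransactionAgeCheck {
    // Sketch of deciding whether a recovered transaction has presumably
    // timed out. The producer tracks age with wall-clock time, i.e.
    // System.currentTimeMillis(); nowMillis is a parameter here for testability.
    static boolean presumedTimedOut(long transactionStartMillis,
                                    long transactionTimeoutMillis,
                                    long nowMillis) {
        return nowMillis - transactionStartMillis > transactionTimeoutMillis;
    }

    public static void main(String[] args) {
        long start = 1_000_000L;
        long timeout = 60_000L; // a 1-minute transaction timeout
        System.out.println(presumedTimedOut(start, timeout, start + 30_000)); // false
        System.out.println(presumedTimedOut(start, timeout, start + 90_000)); // true
    }
}
```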
-
open
Deprecated. Initializes the connection to Kafka.
- Specified by:
open in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
open in class org.apache.flink.api.common.functions.AbstractRichFunction
- Throws:
Exception
-
invoke
public void invoke(FlinkKafkaProducer.KafkaTransactionState transaction, IN next, org.apache.flink.streaming.api.functions.sink.SinkFunction.Context context) throws FlinkKafkaException
Deprecated.
- Specified by:
invoke in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
close
Deprecated.
- Specified by:
close in interface org.apache.flink.api.common.functions.RichFunction
- Overrides:
close in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
beginTransaction
Deprecated.
- Specified by:
beginTransaction in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
preCommit
protected void preCommit(FlinkKafkaProducer.KafkaTransactionState transaction) throws FlinkKafkaException
Deprecated.
- Specified by:
preCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
FlinkKafkaException
-
commit
Deprecated.
- Specified by:
commit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
recoverAndCommit
Deprecated.
- Overrides:
recoverAndCommit in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
abort
Deprecated.
- Specified by:
abort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
recoverAndAbort
Deprecated.
- Overrides:
recoverAndAbort in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
acknowledgeMessage
protected void acknowledgeMessage()
Deprecated. ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer. If not, be sure you know what you are doing. -
snapshotState
public void snapshotState(org.apache.flink.runtime.state.FunctionSnapshotContext context) throws Exception
Deprecated.
- Specified by:
snapshotState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Overrides:
snapshotState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
Exception
-
initializeState
public void initializeState(org.apache.flink.runtime.state.FunctionInitializationContext context) throws Exception
Deprecated.
- Specified by:
initializeState in interface org.apache.flink.streaming.api.checkpoint.CheckpointedFunction
- Overrides:
initializeState in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
- Throws:
Exception
-
initializeUserContext
Deprecated.
- Overrides:
initializeUserContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
finishProcessing
Deprecated.
- Overrides:
finishProcessing in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
finishRecoveringContext
protected void finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState> handledTransactions)
Deprecated.
- Overrides:
finishRecoveringContext in class org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction<IN, FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionContext>
-
createProducer
Deprecated. -
checkErroneous
Deprecated.
- Throws:
FlinkKafkaException
-
getPartitionsByTopic
protected static int[] getPartitionsByTopic(String topic, org.apache.kafka.clients.producer.Producer<byte[], byte[]> producer) Deprecated. -
getTransactionTimeout
Deprecated.
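The Method Summary lists this entry as static long getTransactionTimeout(Properties producerConfig). A plausible sketch of reading Kafka's transaction.timeout.ms key from the producer config follows; the exact parsing and error handling are assumptions, not Flink's code.

```java
import java.util.Properties;

public class TransactionTimeoutDemo {
    // Sketch: read Kafka's "transaction.timeout.ms" setting from the
    // producer config. Handling of String vs Number values is an assumption.
    static long getTransactionTimeout(Properties producerConfig) {
        Object value = producerConfig.get("transaction.timeout.ms");
        if (value == null) {
            throw new IllegalArgumentException("transaction.timeout.ms is not set");
        }
        if (value instanceof Number) {
            return ((Number) value).longValue();
        }
        return Long.parseLong(value.toString());
    }

    public static void main(String[] args) {
        Properties config = new Properties();
        config.setProperty("transaction.timeout.ms", "900000"); // 15 minutes
        System.out.println(getTransactionTimeout(config)); // 900000
    }
}
```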
-
- See Also:
KafkaSink