Class FlinkKafkaInternalProducer<K,V>
- java.lang.Object
  - org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer<K,V>

All Implemented Interfaces:
Closeable, AutoCloseable, org.apache.kafka.clients.producer.Producer<K,V>

@PublicEvolving
public class FlinkKafkaInternalProducer<K,V>
extends Object
implements org.apache.kafka.clients.producer.Producer<K,V>
Internal Flink Kafka producer.
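A minimal sketch of how this producer is typically driven through a Kafka transaction, using the constructor and methods listed on this page. The broker address, serializer settings, and topic name are illustrative assumptions, not part of this API:

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer;

public class TransactionSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // illustrative
        props.put("transactional.id", "flink-sink-0");     // required for transactional use
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        FlinkKafkaInternalProducer<String, String> producer =
                new FlinkKafkaInternalProducer<>(props);
        producer.initTransactions();                        // register with the coordinator
        producer.beginTransaction();
        producer.send(new ProducerRecord<>("output-topic", "key", "value"));
        producer.commitTransaction();                       // or abortTransaction() on failure
        producer.close();
    }
}
```

This mirrors the standard KafkaProducer transactional flow; the class exists so Flink can additionally resume a transaction across restarts via resumeTransaction(long, short).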
-
-
Field Summary
Fields:
- protected org.apache.kafka.clients.producer.KafkaProducer<K,V> kafkaProducer
- protected String transactionalId
-
Constructor Summary
Constructors:
- FlinkKafkaInternalProducer(Properties properties)
-
Method Summary
- void abortTransaction()
- void beginTransaction()
- void close()
- void close(java.time.Duration duration)
- void commitTransaction()
- void flush()
- protected static Enum<?> getEnum(String enumFullName)
- short getEpoch()
- protected static Object getField(Object object, String fieldName)
  Gets and returns the field fieldName from the given Object object using reflection.
- long getProducerId()
- String getTransactionalId()
- int getTransactionCoordinatorId()
- void initTransactions()
- protected static Object invoke(Object object, String methodName, Object... args)
- Map<org.apache.kafka.common.MetricName,? extends org.apache.kafka.common.Metric> metrics()
- List<org.apache.kafka.common.PartitionInfo> partitionsFor(String topic)
- void resumeTransaction(long producerId, short epoch)
  Instead of obtaining the producerId and epoch from the transaction coordinator, re-use previously obtained ones, so that we can resume a transaction after a restart.
- Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record)
- Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, org.apache.kafka.clients.producer.Callback callback)
- void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, String consumerGroupId)
- void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> map, org.apache.kafka.clients.consumer.ConsumerGroupMetadata consumerGroupMetadata)
- protected static void setField(Object object, String fieldName, Object value)
  Sets the field fieldName on the given Object object to value using reflection.
-
-
-
Constructor Detail
-
FlinkKafkaInternalProducer
public FlinkKafkaInternalProducer(Properties properties)
-
-
Method Detail
-
initTransactions
public void initTransactions()
-
beginTransaction
public void beginTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
commitTransaction
public void commitTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
abortTransaction
public void abortTransaction() throws org.apache.kafka.common.errors.ProducerFencedException
-
sendOffsetsToTransaction
public void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> offsets, String consumerGroupId) throws org.apache.kafka.common.errors.ProducerFencedException
-
sendOffsetsToTransaction
public void sendOffsetsToTransaction(Map<org.apache.kafka.common.TopicPartition,org.apache.kafka.clients.consumer.OffsetAndMetadata> map, org.apache.kafka.clients.consumer.ConsumerGroupMetadata consumerGroupMetadata) throws org.apache.kafka.common.errors.ProducerFencedException
-
send
public Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record)
-
send
public Future<org.apache.kafka.clients.producer.RecordMetadata> send(org.apache.kafka.clients.producer.ProducerRecord<K,V> record, org.apache.kafka.clients.producer.Callback callback)
-
metrics
public Map<org.apache.kafka.common.MetricName,? extends org.apache.kafka.common.Metric> metrics()
-
close
public void close()
-
close
public void close(java.time.Duration duration)
-
flush
public void flush()
-
resumeTransaction
public void resumeTransaction(long producerId, short epoch)
Instead of obtaining the producerId and epoch from the transaction coordinator, re-use previously obtained ones, so that we can resume a transaction after a restart. The implementation of this method is based on KafkaProducer.initTransactions(). https://github.com/apache/kafka/commit/5d2422258cb975a137a42a4e08f03573c49a387e#diff-f4ef1afd8792cd2a2e9069cd7ddea630
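A sketch of the restart flow this method enables. The method names come from this page; how the producerId and epoch are persisted (e.g. inside a Flink checkpoint) and the Properties setup are assumptions for illustration:

```java
import java.util.Properties;

import org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer;

public class ResumeSketch {
    // Before the restart: capture the transaction identity so it can be persisted.
    static long snapshotProducerId(FlinkKafkaInternalProducer<byte[], byte[]> p) {
        return p.getProducerId();
    }

    static short snapshotEpoch(FlinkKafkaInternalProducer<byte[], byte[]> p) {
        return p.getEpoch();
    }

    // After the restart: re-attach to the pre-restart transaction instead of
    // calling initTransactions(), which would fence the old producer and bump
    // the epoch, and then finish the interrupted commit.
    static void finishAfterRestart(Properties props, long producerId, short epoch) {
        FlinkKafkaInternalProducer<byte[], byte[]> producer =
                new FlinkKafkaInternalProducer<>(props);
        producer.resumeTransaction(producerId, epoch);
        producer.commitTransaction();
        producer.close();
    }
}
```

The design point is that commit must be repeatable after failure: a recovered job re-issues the commit for the transaction that was open when the snapshot was taken, rather than starting a fresh producer session.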
-
getTransactionalId
public String getTransactionalId()
-
getProducerId
public long getProducerId()
-
getEpoch
public short getEpoch()
-
getTransactionCoordinatorId
@VisibleForTesting public int getTransactionCoordinatorId()
-
getField
protected static Object getField(Object object, String fieldName)
Gets and returns the field fieldName from the given Object object using reflection.
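The reflection pattern behind this helper can be sketched in plain Java. The Example class, its secret field, and the error handling below are invented for illustration; the real helper operates on Kafka producer internals and is only a protected utility:

```java
import java.lang.reflect.Field;

public class GetFieldSketch {
    // Invented example target standing in for a Kafka producer internal.
    static class Example {
        private String secret = "hidden";
    }

    // Minimal equivalent of the protected getField(Object, String) helper:
    // look up the declared field, make it accessible, and read its value.
    static Object getField(Object object, String fieldName) {
        try {
            Field field = object.getClass().getDeclaredField(fieldName);
            field.setAccessible(true); // bypass the private modifier
            return field.get(object);
        } catch (NoSuchFieldException | IllegalAccessException e) {
            throw new RuntimeException("Cannot read field: " + fieldName, e);
        }
    }

    public static void main(String[] args) {
        Object value = getField(new Example(), "secret");
        System.out.println(value); // prints "hidden"
    }
}
```

Reflection is what lets resumeTransaction(long, short) reach into KafkaProducer's private TransactionManager state, at the cost of being sensitive to Kafka client version changes.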
-
-