All Classes All Packages
A
- abort(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- abortTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- AbstractFetcher<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Base class for all fetchers, which implement the connections to Kafka brokers and pull records from Kafka partitions.
- AbstractFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
- AbstractPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Base class for all partition discoverers.
- AbstractPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
- AbstractPartitionDiscoverer.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
Thrown if this discoverer was used to discover partitions after it was closed.
- AbstractPartitionDiscoverer.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
Signaling exception to indicate that an actual Kafka call was interrupted.
- acknowledgeMessage() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. ATTENTION to subclass implementors: When overriding this method, please always call super.acknowledgeMessage() to keep the invariants of the internal bookkeeping of the producer.
- add(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Adds the element to the queue, or fails with an exception, if the queue is closed.
- addDiscoveredPartitions(List<KafkaTopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Adds a list of newly discovered partitions to the fetcher for consuming.
- addIfOpen(E) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Tries to add an element to the queue, if the queue is still open.
- addReader(int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- addSplitsBack(List<KafkaPartitionSplit>, int) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- adjustAutoCommitConfig(Properties, OffsetCommitMode) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Make sure that auto commit is disabled when our offset commit mode is ON_CHECKPOINTS.
- ALL - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
- applyReadableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- applyWatermark(WatermarkStrategy<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- applyWritableMetadata(List<String>, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- asRecord() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
- assign(String, int, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
- assign(KafkaTopicPartition, int) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
-
Returns the index of the target subtask that a specific Kafka partition should be assigned to.
- assignedPartitions() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumState
- assignTimestampsAndWatermarks(WatermarkStrategy<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Sets the given WatermarkStrategy on this consumer.
- assignTimestampsAndWatermarks(AssignerWithPeriodicWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Deprecated. Please use FlinkKafkaConsumerBase.assignTimestampsAndWatermarks(WatermarkStrategy) instead.
- assignTimestampsAndWatermarks(AssignerWithPunctuatedWatermarks<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Deprecated. Please use FlinkKafkaConsumerBase.assignTimestampsAndWatermarks(WatermarkStrategy) instead.
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- asSummaryString() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- asWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
- asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Errors encountered in the async producer are stored here.
- asyncException - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Errors encountered in the async producer are stored here.
- AT_LEAST_ONCE - org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated. With Semantic.AT_LEAST_ONCE, the Flink producer waits on a checkpoint for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer.
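The assignTimestampsAndWatermarks entries above note that the assigner-based overloads are deprecated in favor of the WatermarkStrategy variant. A minimal sketch of the recommended form follows; the broker address, group id, topic name, and the five-second out-of-orderness bound are illustrative assumptions, not values from this index:

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class WatermarkMigrationSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // illustrative address
        props.setProperty("group.id", "example-group");           // illustrative group id

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("example-topic", new SimpleStringSchema(), props);

        // Replaces the deprecated AssignerWithPeriodicWatermarks /
        // AssignerWithPunctuatedWatermarks overloads:
        consumer.assignTimestampsAndWatermarks(
                WatermarkStrategy.<String>forBoundedOutOfOrderness(Duration.ofSeconds(5)));
    }
}
```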
B
- beginningOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List beginning offsets for the specified partitions.
- beginningOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- beginTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- beginTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- boundedMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The bounded mode for the contained consumer (default is an unbounded data stream).
- BoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
End modes for the Kafka Consumer.
- boundedTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The bounded timestamp to locate partition offsets; only relevant when the bounded mode is BoundedMode.TIMESTAMP.
- build() - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Constructs the KafkaRecordSerializationSchema with the configured properties.
- build() - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Constructs the KafkaSink with the configured properties.
- build() - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Builds the KafkaSource.
- builder() - Static method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Creates a default schema builder to provide common building blocks i.e.
- builder() - Static method in class org.apache.flink.connector.kafka.sink.KafkaSink
-
Creates a KafkaSinkBuilder to construct a new KafkaSink.
- builder() - Static method in class org.apache.flink.connector.kafka.source.KafkaSource
-
Gets a KafkaSourceBuilder to build a KafkaSource.
- BYTES_CONSUMED_TOTAL - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
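The builder entries above (KafkaSource.builder(), KafkaSourceBuilder.build()) fit together as a fluent configuration chain. A minimal sketch, assuming an illustrative broker address, topic, and group id:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;

public class SourceBuilderSketch {
    public static void main(String[] args) {
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092") // illustrative address
                .setTopics("example-topic")            // illustrative topic
                .setGroupId("example-group")           // illustrative group id
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```

The resulting KafkaSource would then be attached to a job via StreamExecutionEnvironment.fromSource together with a WatermarkStrategy.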
C
- callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. The callback that handles error propagation or logging callbacks.
- callback - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
The callback that handles error propagation or logging callbacks.
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
- cancel() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
- checkAndThrowException() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
-
Checks whether an exception has been set via ExceptionProxy.reportError(Throwable).
- checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- checkErroneous() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- checkpointLock - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
The lock that guarantees that record emission and state updates are atomic, from the view of taking a checkpoint.
- CLIENT_ID_PREFIX - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- ClosableBlockingQueue<E> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A special form of blocking queue with two additions: The queue can be closed atomically when empty.
- ClosableBlockingQueue() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new empty queue.
- ClosableBlockingQueue(int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new empty queue, reserving space for at least the specified number of elements.
- ClosableBlockingQueue(Collection<? extends E>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Creates a new queue that contains the given elements.
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- close() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- close() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Closes the partition discoverer, cleaning up all Kafka connections.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Tries to close the queue.
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- close() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Closes the handover.
- close(Duration) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Close all established connections.
- closeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.ClosedException
- ClosedException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.Handover.ClosedException
- commit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- COMMIT_OFFSETS_ON_CHECKPOINT - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- commitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Commits the given partition offsets to the Kafka brokers (or to ZooKeeper for older Kafka versions).
- commitOffsets(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
- COMMITS_FAILED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- COMMITS_FAILED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- COMMITS_SUCCEEDED_METRIC_COUNTER - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- COMMITS_SUCCEEDED_METRICS_COUNTER - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- COMMITTED_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- COMMITTED_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- COMMITTED_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- committedOffsets() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
- committedOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
The group id should be set for the KafkaSource before invoking this method.
- committedOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- committedOffsets(OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the committed offsets.
- commitTransaction() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- Comparator() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator
- compare(KafkaTopicPartition, KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition.Comparator
- consumedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type of the consumed data.
- CONSUMER_FETCH_MANAGER_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- ContextStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer
- ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot
-
Deprecated.
- ContextStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- copy() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- copy(DataInputView, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionContext, FlinkKafkaProducer.KafkaTransactionContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.KafkaTransactionState, FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- copy(FlinkKafkaProducer.NextTransactionalIdHint, FlinkKafkaProducer.NextTransactionalIdHint) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- createCommitter() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createDynamicTableSink(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createDynamicTableSource(DynamicTableFactory.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- createEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Creates the fetcher that connects to the Kafka brokers, pulls data, deserializes the data, and emits it into the data streams.
- createFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, StreamingRuntimeContext, OffsetCommitMode, MetricGroup, boolean) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleConsumer
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- createInstance() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Creates the Kafka version specific representation of the given topic partition.
- createKafkaPartitionHandle(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
- createKafkaSource(DeserializationSchema<RowData>, DeserializationSchema<RowData>, TypeInformation<RowData>) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- createKafkaTableSink(DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, Integer, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createKafkaTableSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, String) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- createObjectMapper() - Static method in class org.apache.flink.connector.kafka.util.JacksonMapperFactory
- createObjectMapper(JsonFactory) - Static method in class org.apache.flink.connector.kafka.util.JacksonMapperFactory
- createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- createPartitionDiscoverer(KafkaTopicsDescriptor, int, int) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Creates the partition discoverer that is used to find new partitions for this subtask.
- createProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- createReader(SourceReaderContext) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- createRuntimeDecoder(DynamicTableSource.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- createRuntimeEncoder(DynamicTableSink.Context, DataType) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- createWriter(Sink.InitContext) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- CURRENT_OFFSET_METRIC_GAUGE - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- CURRENT_OFFSETS_METRICS_GAUGE - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
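The committedOffsets entries above describe an OffsetsInitializer that resumes from a consumer group's committed offsets; the overload taking an OffsetResetStrategy covers partitions without a committed offset yet. A small sketch of that configuration (the fallback strategy chosen here is an illustrative assumption):

```java
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;

public class CommittedOffsetsSketch {
    public static void main(String[] args) {
        // Start from the group's committed offsets; fall back to the earliest
        // offset for partitions that have no committed offset. As the index
        // notes, a group id must be set on the KafkaSource for this to work.
        OffsetsInitializer starting =
                OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST);
    }
}
```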
D
- DecodingFormatWrapper(DecodingFormat<DeserializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- DEFAULT_KAFKA_PRODUCERS_POOL_SIZE - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Default number of KafkaProducers in the pool.
- DEFAULT_KAFKA_TRANSACTION_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Default value for the Kafka transaction timeout.
- DEFAULT_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated. From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
- DefaultKafkaSinkContext - Class in org.apache.flink.connector.kafka.sink
-
Context providing information to assist constructing a ProducerRecord.
- DefaultKafkaSinkContext(int, int, Properties) - Constructor for class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- defaultTopicId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. The name of the default topic this producer is writing data to.
- defaultTopicId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
The name of the default topic this producer is writing data to.
- DELIVERY_GUARANTEE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- deserialize(byte[], byte[], String, int, long) - Method in interface org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema
-
Deprecated. Deserializes the byte message.
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- deserialize(int, byte[]) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- deserialize(DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- deserialize(FlinkKafkaProducer.KafkaTransactionContext, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- deserialize(FlinkKafkaProducer.KafkaTransactionState, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- deserialize(FlinkKafkaProducer.NextTransactionalIdHint, DataInputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElementDeserializer
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
-
Deserializes the Kafka record.
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in interface org.apache.flink.streaming.util.serialization.KeyedDeserializationSchema
-
Deprecated.
- deserialize(ConsumerRecord<byte[], byte[]>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
- deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Deserializes the byte message.
- deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- deserialize(ConsumerRecord<byte[], byte[]>, Collector<T>) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
-
Deserializes the Kafka record.
- deserializer - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
The schema to convert between Kafka's byte messages and Flink's objects.
- DISABLED - org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Completely disable offset committing.
- DISABLED - Static variable in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- disableFilterRestoredPartitionsWithSubscribedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
By default, when restoring from a checkpoint / savepoint, the consumer always ignores restored partitions that are no longer associated with the current specified topics or topic pattern to subscribe to.
- discoverPartitions() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Execute a partition discovery attempt for this subtask.
- doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
- doCommitInternalOffsetsToKafka(Map<KafkaTopicPartition, Long>, KafkaCommitCallback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
- dropLeaderData(List<KafkaTopicPartitionLeader>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
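Several deserialize entries above belong to KafkaRecordDeserializationSchema, the deserialization contract of the new KafkaSource. A minimal sketch of the common value-only case, wrapping a plain Kafka Deserializer class:

```java
import org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ValueOnlySketch {
    public static void main(String[] args) {
        // Deserialize only the record value, ignoring the key and headers:
        KafkaRecordDeserializationSchema<String> schema =
                KafkaRecordDeserializationSchema.valueOnly(StringDeserializer.class);
    }
}
```

The resulting schema would be passed to KafkaSourceBuilder via setDeserializer (or, equivalently for values, setValueOnlyDeserializer with a Flink DeserializationSchema).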
E
- earliest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets to the earliest available offsets of each partition.
- EARLIEST - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from the earliest offset possible.
- EARLIEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- EARLIEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- EARLIEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines the partition should start from the earliest offset.
- emitRecord(ConsumerRecord<byte[], byte[]>, SourceOutput<T>, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
- emitRecordsWithTimestamps(Queue<T>, KafkaTopicPartitionState<T, KPH>, long, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Emits a record attaching a timestamp to it.
- emitWatermark(Watermark) - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
- EncodingFormatWrapper(EncodingFormat<SerializationSchema<RowData>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- endOffsets(Collection<TopicPartition>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List end offsets for the specified partitions.
- endOffsets(Collection<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- equals(Object) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
-
Deprecated.
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- equals(Object) - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- EXACTLY_ONCE - org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated.With Semantic.EXACTLY_ONCE, the Flink producer writes all messages in a Kafka transaction that is committed to Kafka on a checkpoint.
- EXCEPT_KEY - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
- ExceptionProxy - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A proxy that communicates exceptions between threads.
- ExceptionProxy(Thread) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
-
Creates an exception proxy that interrupts the given thread upon report of an exception.
- EXTERNAL_ERROR - org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
- extractTimestamp(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- extractTimestamp(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
F
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- factoryIdentifier() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- fetch() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- fetchOffsetsWithTimestamp(Collection<KafkaTopicPartition>, long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- finishRecoveringContext(Collection<FlinkKafkaProducer.KafkaTransactionState>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- FlinkFixedPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
-
A partitioner ensuring that each internal Flink partition ends up in one Kafka partition.
- FlinkFixedPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
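The fixed mapping described above (each parallel Flink subtask writes to exactly one Kafka partition, wrapping around when subtasks outnumber partitions) can be sketched in plain Java. FixedPartitionerSketch and its partition helper are hypothetical names for illustration, not the connector's actual code.

```java
public class FixedPartitionerSketch {
    // Hypothetical sketch: map a Flink subtask index to a single Kafka
    // partition, wrapping with modulo when there are more subtasks than partitions.
    static int partition(int parallelInstanceId, int[] partitions) {
        if (partitions.length == 0) {
            throw new IllegalArgumentException("Partitions of the target topic are empty.");
        }
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        // Subtasks 0..4 map to partitions 0, 1, 2, 0, 1.
        for (int subtask = 0; subtask < 5; subtask++) {
            System.out.println(subtask + " -> " + partition(subtask, partitions));
        }
    }
}
```

The wrap-around keeps the mapping stable per subtask, which is why each internal Flink partition ends up in one Kafka partition.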
- FlinkKafkaConsumer<T> - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.
- FlinkKafkaConsumer(String, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(String, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(List<String>, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(List<String>, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(Pattern, DeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumer(Pattern, KafkaDeserializationSchema<T>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Creates a new Kafka streaming source consumer.
- FlinkKafkaConsumerBase<T> - Class in org.apache.flink.streaming.connectors.kafka
-
Base class of all Flink Kafka Consumer data sources.
- FlinkKafkaConsumerBase(List<String>, Pattern, KafkaDeserializationSchema<T>, long, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Base constructor.
- FlinkKafkaErrorCode - Enum in org.apache.flink.streaming.connectors.kafka
-
Error codes used in FlinkKafkaException.
- FlinkKafkaException - Exception in org.apache.flink.streaming.connectors.kafka
-
Exception used by FlinkKafkaProducer and FlinkKafkaConsumer.
- FlinkKafkaException(FlinkKafkaErrorCode, String) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
- FlinkKafkaException(FlinkKafkaErrorCode, String, Throwable) - Constructor for exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
- FlinkKafkaInternalProducer<K,V> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Internal Flink Kafka producer.
- FlinkKafkaInternalProducer(Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- flinkKafkaPartitioner - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
User-provided partitioner for assigning an object to a Kafka partition for each topic.
- FlinkKafkaPartitioner<T> - Class in org.apache.flink.streaming.connectors.kafka.partitioner
-
A FlinkKafkaPartitioner wraps logic on how to partition records across partitions of multiple Kafka topics.
- FlinkKafkaPartitioner() - Constructor for class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
- FlinkKafkaProducer<IN> - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Please use KafkaSink.
- FlinkKafkaProducer(String, String, SerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, String, KeyedSerializationSchema<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, SerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, KafkaSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Creates a FlinkKafkaProducer for a given topic.
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, Optional<FlinkKafkaPartitioner<IN>>, FlinkKafkaProducer.Semantic, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
- FlinkKafkaProducer(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaProducer.Semantic) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
- FlinkKafkaProducer.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.TypeSerializer for FlinkKafkaProducer.KafkaTransactionContext.
- FlinkKafkaProducer.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer.KafkaTransactionContext - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Context associated with this instance of the FlinkKafkaProducer.
- FlinkKafkaProducer.KafkaTransactionState - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.State for handling transactions.
- FlinkKafkaProducer.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Keeps the information required to deduce the next safe-to-use transactional id.
- FlinkKafkaProducer.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.TypeSerializer for FlinkKafkaProducer.NextTransactionalIdHint.
- FlinkKafkaProducer.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer.Semantic - Enum in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Semantics that can be chosen.
- FlinkKafkaProducer.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.TypeSerializer for FlinkKafkaProducer.KafkaTransactionState.
- FlinkKafkaProducer.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
-
Deprecated.Serializer configuration snapshot for compatibility and format evolution.
- FlinkKafkaProducer011 - Class in org.apache.flink.streaming.connectors.kafka
-
Compatibility class to make migration possible from the 0.11 connector to the universal one.
- FlinkKafkaProducer011() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011
- FlinkKafkaProducer011.ContextStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.ContextStateSerializer.ContextStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.NextTransactionalIdHint - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.NextTransactionalIdHintSerializer - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.TransactionStateSerializer - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducer011.TransactionStateSerializer.TransactionStateSerializerSnapshot - Class in org.apache.flink.streaming.connectors.kafka
- FlinkKafkaProducerBase<IN> - Class in org.apache.flink.streaming.connectors.kafka
-
Flink Sink to produce data into a Kafka topic.
- FlinkKafkaProducerBase(String, KeyedSerializationSchema<IN>, Properties, FlinkKafkaPartitioner<IN>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
The main constructor for creating a FlinkKafkaProducer.
- FlinkKafkaShuffle - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
FlinkKafkaShuffle uses Kafka as a message bus to shuffle and persist data at the same time.
- FlinkKafkaShuffle() - Constructor for class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
- FlinkKafkaShuffleConsumer<T> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Consumer Function.
- FlinkKafkaShuffleProducer<IN,KEY> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Producer Function.
- FlinkKafkaShuffleProducer.KafkaSerializer<IN> - Class in org.apache.flink.streaming.connectors.kafka.shuffle
-
Flink Kafka Shuffle Serializer.
- flush() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Flush pending records.
- flush() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- flushMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Sink buffer flush config, which is currently only supported in upsert mode.
- flushOnCheckpoint - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
If true, the producer will wait until all outstanding records have been sent to the broker.
- forwardOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- fromConfiguration(boolean, boolean, boolean) - Static method in class org.apache.flink.streaming.connectors.kafka.config.OffsetCommitModes
-
Determine the offset commit mode using several configuration values.
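As a rough illustration of how several boolean configuration values can be combined into one commit mode, the sketch below assumes a plausible precedence (checkpointing overrides Kafka's periodic auto-commit). OffsetCommitModeSketch and its Mode enum are hypothetical; the real OffsetCommitModes may differ in detail.

```java
public class OffsetCommitModeSketch {
    // Hypothetical commit modes, loosely mirroring the legacy consumer's options.
    enum Mode { DISABLED, ON_CHECKPOINTS, KAFKA_PERIODIC }

    // Sketch of the decision: with checkpointing enabled, commit on checkpoints
    // (if requested); with checkpointing disabled, fall back to Kafka auto-commit.
    static Mode fromConfiguration(
            boolean enableAutoCommit, boolean commitOnCheckpoints, boolean checkpointingEnabled) {
        if (checkpointingEnabled) {
            return commitOnCheckpoints ? Mode.ON_CHECKPOINTS : Mode.DISABLED;
        }
        return enableAutoCommit ? Mode.KAFKA_PERIODIC : Mode.DISABLED;
    }

    public static void main(String[] args) {
        System.out.println(fromConfiguration(true, true, true));   // ON_CHECKPOINTS
        System.out.println(fromConfiguration(true, false, false)); // KAFKA_PERIODIC
    }
}
```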
G
- generateIdsToAbort() - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
-
If we have to abort previous transactional ids in case of a restart after a failure BEFORE the first checkpoint completed, we don't know what parallelism was used in the previous attempt.
- generateIdsToUse(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
-
The range of available transactional ids to use is [nextFreeTransactionalId, nextFreeTransactionalId + parallelism * kafkaProducersPoolSize). The loop below picks, in a deterministic way, a subrange of those available transactional ids based on the index of this subtask.
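The subrange selection described above comes down to plain arithmetic: subtask i claims the contiguous slice starting at i * poolSize. TransactionalIdRangeSketch is a hypothetical helper for illustration, not the actual TransactionalIdsGenerator.

```java
public class TransactionalIdRangeSketch {
    // Hypothetical sketch: out of the global pool
    // [nextFreeTransactionalId, nextFreeTransactionalId + parallelism * poolSize),
    // each subtask deterministically claims the slice at subtaskIndex * poolSize.
    static long[] idsToUse(long nextFreeTransactionalId, int subtaskIndex, int poolSize) {
        long[] ids = new long[poolSize];
        long start = nextFreeTransactionalId + (long) subtaskIndex * poolSize;
        for (int i = 0; i < poolSize; i++) {
            ids[i] = start + i;
        }
        return ids;
    }

    public static void main(String[] args) {
        // With nextFreeTransactionalId = 100 and poolSize = 3, subtask 2 gets 106..108.
        for (long id : idsToUse(100, 2, 3)) {
            System.out.println(id);
        }
    }
}
```

Because the slices are disjoint, no two subtasks can ever reuse each other's transactional ids.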
- getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Fetch the list of all partitions for a specific list of topics from Kafka.
- getAllPartitionsForTopics(List<String>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Fetch the list of all topics from Kafka.
- getAllTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- getAutoOffsetResetStrategy() - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- getAutoOffsetResetStrategy() - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the auto offset reset strategy in case the initialized offsets fall out of range.
- getBatchBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets all the elements found in the list, or blocks until at least one element is added.
- getBatchBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets all the elements found in the list, or blocks until at least one element is added.
- getBatchIntervalMs() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- getBatchSize() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- getBoundedness() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- getChangelogMode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- getChangelogMode(ChangelogMode) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- getCommittableSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- getCommittedOffset() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- getCurrentOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- getDescription() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- getElementBlocking() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the next element in the queue.
- getElementBlocking(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the next element in the queue.
- getEnableCommitOnCheckpoints() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- getEnum(String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- getEnumeratorCheckpointSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getEpoch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- getErrorCode() - Method in exception org.apache.flink.streaming.connectors.kafka.FlinkKafkaException
- getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
-
Gets the name of this fetcher, for thread naming and logging purposes.
- getFetcherName() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
- getField(Object, String) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
Gets and returns the field fieldName from the given Object object using reflection.
- getFixedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
- getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.
- getIsAutoCommitEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- getKafkaMetric(Map<MetricName, ? extends Metric>, String, String) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics.
- getKafkaMetric(Map<MetricName, ? extends Metric>, Predicate<Map.Entry<MetricName, ? extends Metric>>) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Tries to find the Kafka Metric in the provided metrics matching a given filter.
- getKafkaPartitionHandle() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
Gets Kafka's descriptor for the Kafka Partition.
- getKafkaProducer(Properties) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Used for testing only.
- getKafkaProducerConfig() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- getKafkaTopicPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
Gets Flink's descriptor for the Kafka Partition.
- getLeader() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- getLength() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- getNumberOfParallelInstances() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getNumberOfParallelInstances() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
- getOffset() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
-
The current offset in the partition.
- getOption(Properties, ConfigOption<?>, Function<String, T>) - Static method in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- getParallelInstanceId() - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getParallelInstanceId() - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
Get the ID of the subtask the KafkaSink is running on.
- getPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- getPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- getPartitionOffsets(Collection<TopicPartition>, OffsetsInitializer.PartitionOffsetsRetriever) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get the initial offsets for the given Kafka partitions.
- getPartitionsByTopic(String, KafkaProducer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- getPartitionsByTopic(String, Producer<byte[], byte[]>) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- getPartitionSetSubscriber(Set<TopicPartition>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getPartitionsForTopic(String) - Method in class org.apache.flink.connector.kafka.sink.DefaultKafkaSinkContext
- getPartitionsForTopic(String) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema.KafkaSinkContext
-
For a given topic id, retrieve the available partitions.
- getProducedType() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- getProducedType() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- getProducedType() - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- getProducedType() - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
- getProducer() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- getProducerId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- getPropertiesFromBrokerList(String) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- getScanRuntimeProvider(ScanTableSource.ScanContext) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- getSerializationSchema() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
- getSinkRuntimeProvider(DynamicTableSink.Context) - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- getSplitSerializer() - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- getStartingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getStateSentinel() - Method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
- getStoppingOffset() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getSubscribedTopicPartitions(AdminClient) - Method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
-
Get a set of subscribed TopicPartitions.
- getSubtask() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
- getTargetTopic(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
- getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- getTargetTopic(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
- getTargetTopic(T) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Returns the topic that the presented element should be sent to.
- getTargetTopic(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated.Optional method to determine the target topic for the element.
- getTimestamp() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
- getTopic() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- getTopic() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- getTopicListSubscriber(List<String>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getTopicPartition() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- getTopicPartition() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- getTopicPatternSubscriber(Pattern) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriber
- getTransactionalId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- getTransactionCoordinatorId() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- getTransactionTimeout(Properties) - Static method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleRecord
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- getValue() - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
- getVersion() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- getVersion() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- getWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleWatermark
- getWriterStateSerializer() - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- GROUP_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number defining that the partition should start from its committed group offset in Kafka.
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from committed offsets in ZK / Kafka brokers of a specific consumer group.
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from committed offsets in ZK / Kafka brokers of a specific consumer group (default).
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- GROUP_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
H
- handleSplitRequest(int, String) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- handleSplitsChanges(SplitsChange<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- Handover - Class in org.apache.flink.streaming.connectors.kafka.internals
-
The Handover is a utility to hand over data (a buffer of records) and exceptions from a producer thread to a consumer thread.
- Handover() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.Handover
- Handover.ClosedException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
An exception thrown by the Handover in the Handover.pollNext() or Handover.produce(ConsumerRecords) method, after the Handover was closed via Handover.close().
- Handover.WakeupException - Exception in org.apache.flink.streaming.connectors.kafka.internals
-
A special exception thrown by the Handover in the Handover.produce(ConsumerRecords) method when the producer is woken up from a blocking call via Handover.wakeupProducer().
- hashCode() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
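The hand-over protocol described in the entries above can be approximated with a single-slot rendezvous built on wait/notify. HandoverSketch is a hypothetical simplification of the real Handover, which additionally carries exceptions across threads and supports producer wakeup.

```java
public class HandoverSketch {
    // Hypothetical single-slot handover: the producer deposits one batch at a
    // time, the consumer takes it; each side blocks while the slot is unavailable.
    private Object next;
    private boolean closed;

    public synchronized void produce(Object element) {
        try {
            while (next != null && !closed) {
                wait(); // block until the consumer has taken the previous batch
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted", e);
        }
        if (closed) {
            throw new IllegalStateException("Handover closed");
        }
        next = element;
        notifyAll();
    }

    public synchronized Object pollNext() {
        try {
            while (next == null && !closed) {
                wait(); // block until the producer deposits a batch
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted", e);
        }
        if (next == null) {
            throw new IllegalStateException("Handover closed");
        }
        Object result = next;
        next = null;
        notifyAll();
        return result;
    }

    public synchronized void close() {
        closed = true;
        notifyAll();
    }

    public static void main(String[] args) throws InterruptedException {
        HandoverSketch handover = new HandoverSketch();
        Thread producer = new Thread(() -> {
            handover.produce("batch-1");
            handover.produce("batch-2");
        });
        producer.start();
        System.out.println(handover.pollNext()); // batch-1
        System.out.println(handover.pollNext()); // batch-2
        producer.join();
    }
}
```

The single slot gives natural backpressure: the producer cannot run ahead of the consumer by more than one batch.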
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
-
Deprecated.
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.DecodingFormatWrapper
- hashCode() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
I
- IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- IDENTIFIER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- ignoreFailuresAfterTransactionTimeout() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Disables the propagation of exceptions thrown when committing presumably timed out Kafka transactions during recovery of the job.
- INITIAL_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- initializeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Establish the required connections in order to fetch topics and partitions metadata.
- initializeConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- initializedState(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- initializeState(FunctionInitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- initializeUserContext() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- initTransactions() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- invoke(IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Called when new data arrives at the sink, and forwards it to Kafka.
- invoke(Object, String, Object...) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- invoke(Watermark) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer
-
This is the function invoked to handle each watermark.
- invoke(FlinkKafkaProducer.KafkaTransactionState, IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- invoke(FlinkKafkaProducer.KafkaTransactionState, IN, SinkFunction.Context) - Method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer
-
This is the function invoked to handle each element.
- isEmpty() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Checks whether the queue is empty (has no elements).
- isEnabled() - Method in class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
- isEndOfStream(ObjectNode) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- isEndOfStream(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
This schema never considers an element to signal end-of-stream, so this method always returns false.
- isEndOfStream(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- isEndOfStream(T) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
-
Method to decide whether the element signals the end of the stream.
- isFixedTopics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
- isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- isImmutableType() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- isMatchingTopic(String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
-
Checks if the input topic matches the topics described by this KafkaTopicsDescriptor.
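The matching rule behind isMatchingTopic can be sketched as follows. This is an illustrative, self-contained version, not the actual KafkaTopicsDescriptor source: a descriptor holds either a fixed topic list or a topic pattern, and an input topic matches if it is contained in the list or matches the pattern.

```java
import java.util.List;
import java.util.regex.Pattern;

// Simplified sketch of a topics descriptor: exactly one of the two fields is set.
class TopicsDescriptorSketch {
    private final List<String> fixedTopics;   // null when a pattern is used
    private final Pattern topicPattern;       // null when fixed topics are used

    TopicsDescriptorSketch(List<String> fixedTopics, Pattern topicPattern) {
        this.fixedTopics = fixedTopics;
        this.topicPattern = topicPattern;
    }

    boolean isFixedTopics() {
        return fixedTopics != null;
    }

    boolean isTopicPattern() {
        return topicPattern != null;
    }

    // A topic matches if it is in the fixed list, or if it matches the pattern.
    boolean isMatchingTopic(String topic) {
        if (isFixedTopics()) {
            return fixedTopics.contains(topic);
        }
        return topicPattern.matcher(topic).matches();
    }
}
```

A descriptor built with `Pattern.compile("metrics-.*")` would accept `metrics-cpu` but reject `logs`, while a fixed-list descriptor accepts only the listed topics.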
- isOffsetDefined() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- isOpen() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Checks whether the queue is currently open, meaning elements can be added and polled.
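The contract of ClosableBlockingQueue described by the isOpen, isEmpty, and add entries can be illustrated with a minimal sketch (not Flink's implementation): elements can be added and polled only while the queue is open, and add fails once the queue has been closed.

```java
import java.util.ArrayDeque;

// Minimal sketch of a closable queue: open by default, add() throws after close().
class MiniClosableQueue<E> {
    private final ArrayDeque<E> elements = new ArrayDeque<>();
    private boolean open = true;

    synchronized boolean isOpen() {
        return open;
    }

    synchronized boolean isEmpty() {
        return elements.isEmpty();
    }

    // Adds the element, or fails with an exception if the queue is closed.
    synchronized void add(E element) {
        if (!open) {
            throw new IllegalStateException("queue is closed");
        }
        elements.addLast(element);
    }

    // Returns the head element, or null if the queue is empty or closed.
    synchronized E poll() {
        return open ? elements.pollFirst() : null;
    }

    synchronized void close() {
        open = false;
    }
}
```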
- isRecord() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
- isSentinel(long) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
- isTopicPattern() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
- isWatermark() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
J
- JacksonMapperFactory - Class in org.apache.flink.connector.kafka.util
-
Factory for Jackson mappers.
- JSONKeyValueDeserializationSchema - Class in org.apache.flink.streaming.util.serialization
-
DeserializationSchema that deserializes a JSON String into an ObjectNode.
- JSONKeyValueDeserializationSchema(boolean) - Constructor for class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
K
- KAFKA_CONSUMER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KAFKA_CONSUMER_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- KAFKA_PERIODIC - org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Commit offsets periodically back to Kafka, using the auto commit functionality of internal Kafka clients.
- KAFKA_SOURCE_READER_METRIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KafkaCommitCallback - Interface in org.apache.flink.streaming.connectors.kafka.internals
-
A callback interface that the source operator can implement to trigger custom actions when a commit request completes, which should normally be triggered from the checkpoint complete event.
- KafkaConnectorOptions - Class in org.apache.flink.streaming.connectors.kafka.table
-
Options for the Kafka connector.
- KafkaConnectorOptions.ScanBoundedMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Bounded mode for the Kafka consumer, see
KafkaConnectorOptions.SCAN_BOUNDED_MODE.
- KafkaConnectorOptions.ScanStartupMode - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Startup mode for the Kafka consumer, see
KafkaConnectorOptions.SCAN_STARTUP_MODE.
- KafkaConnectorOptions.ValueFieldsStrategy - Enum in org.apache.flink.streaming.connectors.kafka.table
-
Strategies to derive the data type of a value format by considering a key format.
- KafkaConsumerMetricConstants - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
A collection of Kafka consumer metrics related constant strings.
- KafkaConsumerMetricConstants() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- KafkaConsumerThread<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
The thread that runs the
KafkaConsumer, connecting to the brokers and polling records.
- KafkaConsumerThread(Logger, Handover, Properties, ClosableBlockingQueue<KafkaTopicPartitionState<T, TopicPartition>>, String, long, boolean, MetricGroup, MetricGroup) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
- KafkaContextAware<T> - Interface in org.apache.flink.streaming.connectors.kafka
-
An interface for
KafkaSerializationSchemas that need information about the context where the Kafka Producer is running along with information about the available partitions.
- KafkaDeserializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
-
The deserialization schema describes how to turn the Kafka ConsumerRecords into data types (Java/Scala objects) that are processed by Flink.
- KafkaDeserializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A simple wrapper for using the DeserializationSchema with the KafkaDeserializationSchema interface.
- KafkaDeserializationSchemaWrapper(DeserializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- KafkaDynamicSink - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka
DynamicTableSink.
- KafkaDynamicSink(DataType, DataType, EncodingFormat<SerializationSchema<RowData>>, EncodingFormat<SerializationSchema<RowData>>, int[], int[], String, String, Properties, FlinkKafkaPartitioner<RowData>, DeliveryGuarantee, boolean, SinkBufferFlushMode, Integer, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- KafkaDynamicSource - Class in org.apache.flink.streaming.connectors.kafka.table
-
A version-agnostic Kafka
ScanTableSource.
- KafkaDynamicSource(DataType, DecodingFormat<DeserializationSchema<RowData>>, DecodingFormat<DeserializationSchema<RowData>>, int[], int[], String, List<String>, Pattern, Properties, StartupMode, Map<KafkaTopicPartition, Long>, long, BoundedMode, Map<KafkaTopicPartition, Long>, long, boolean, String) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- KafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
-
Factory for creating configured instances of
KafkaDynamicSource and KafkaDynamicSink.
- KafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- KafkaFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A fetcher that fetches data from Kafka brokers via the Kafka consumer API.
- KafkaFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
- KafkaMetricMutableWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
Gauge for getting the current value of a Kafka metric.
- KafkaMetricMutableWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- KafkaMetricWrapper - Class in org.apache.flink.streaming.connectors.kafka.internals.metrics
-
Gauge for getting the current value of a Kafka metric.
- KafkaMetricWrapper(Metric) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricWrapper
- KafkaPartitionDiscoverer - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A partition discoverer that can be used to discover topic and partition metadata from Kafka brokers via the Kafka high-level consumer API.
- KafkaPartitionDiscoverer(KafkaTopicsDescriptor, int, int, Properties) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- KafkaPartitionSplit - Class in org.apache.flink.connector.kafka.source.split
-
A
SourceSplit for a Kafka partition.
- KafkaPartitionSplit(TopicPartition, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- KafkaPartitionSplit(TopicPartition, long, long) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- KafkaPartitionSplitReader - Class in org.apache.flink.connector.kafka.source.reader
-
A
SplitReader implementation that reads records from Kafka partitions.
- KafkaPartitionSplitReader(Properties, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- KafkaPartitionSplitSerializer - Class in org.apache.flink.connector.kafka.source.split
-
The
serializer for KafkaPartitionSplit.
- KafkaPartitionSplitSerializer() - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- KafkaPartitionSplitState - Class in org.apache.flink.connector.kafka.source.split
-
This class extends KafkaPartitionSplit to track a mutable current offset.
- KafkaPartitionSplitState(KafkaPartitionSplit) - Constructor for class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- kafkaProducer - Variable in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- KafkaRecordDeserializationSchema<T> - Interface in org.apache.flink.connector.kafka.source.reader.deserializer
-
An interface for the deserialization of Kafka records.
- KafkaRecordEmitter<T> - Class in org.apache.flink.connector.kafka.source.reader
-
The
RecordEmitter implementation for KafkaSourceReader.
- KafkaRecordEmitter(KafkaRecordDeserializationSchema<T>) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter
- KafkaRecordSerializationSchema<T> - Interface in org.apache.flink.connector.kafka.sink
-
A serialization schema which defines how to convert a value of type
T to ProducerRecord.
- KafkaRecordSerializationSchema.KafkaSinkContext - Interface in org.apache.flink.connector.kafka.sink
-
Context providing information about the target location of the Kafka record.
- KafkaRecordSerializationSchemaBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
Builder to construct
KafkaRecordSerializationSchema.
- KafkaRecordSerializationSchemaBuilder() - Constructor for class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
- KafkaSerializationSchema<T> - Interface in org.apache.flink.streaming.connectors.kafka
- KafkaSerializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
An adapter from old-style interfaces such as
SerializationSchema, FlinkKafkaPartitioner to the KafkaSerializationSchema.
- KafkaSerializationSchemaWrapper(String, FlinkKafkaPartitioner<T>, boolean, SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- KafkaShuffleElement() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElement
- KafkaShuffleElementDeserializer(TypeSerializer<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher.KafkaShuffleElementDeserializer
- KafkaShuffleFetcher<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Fetch data from Kafka for Kafka Shuffle.
- KafkaShuffleFetcher(SourceFunction.SourceContext<T>, Map<KafkaTopicPartition, Long>, SerializedValue<WatermarkStrategy<T>>, ProcessingTimeService, long, ClassLoader, String, KafkaDeserializationSchema<T>, Properties, long, MetricGroup, MetricGroup, boolean, TypeSerializer<T>, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
- KafkaShuffleFetcher.KafkaShuffleElement - Class in org.apache.flink.streaming.connectors.kafka.internals
-
An element in a KafkaShuffle.
- KafkaShuffleFetcher.KafkaShuffleElementDeserializer<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Deserializer for KafkaShuffleElement.
- KafkaShuffleFetcher.KafkaShuffleRecord<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
One value with Type T in a KafkaShuffle.
- KafkaShuffleFetcher.KafkaShuffleWatermark - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A watermark element in a KafkaShuffle.
- KafkaSink<IN> - Class in org.apache.flink.connector.kafka.sink
-
Flink Sink to produce data into a Kafka topic.
- KafkaSinkBuilder<IN> - Class in org.apache.flink.connector.kafka.sink
-
Builder to construct
KafkaSink.
- KafkaSource<OUT> - Class in org.apache.flink.connector.kafka.source
-
The Source implementation of Kafka.
- KafkaSourceBuilder<OUT> - Class in org.apache.flink.connector.kafka.source
-
The builder class for
KafkaSource to make it easier for users to construct a KafkaSource.
- KafkaSourceEnumerator - Class in org.apache.flink.connector.kafka.source.enumerator
-
The enumerator class for Kafka source.
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- KafkaSourceEnumerator(KafkaSubscriber, OffsetsInitializer, OffsetsInitializer, Properties, SplitEnumeratorContext<KafkaPartitionSplit>, Boundedness, Set<TopicPartition>) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl - Class in org.apache.flink.connector.kafka.source.enumerator
-
The implementation for offsets retriever with a consumer and an admin client.
- KafkaSourceEnumState - Class in org.apache.flink.connector.kafka.source.enumerator
-
The state of Kafka source enumerator.
- KafkaSourceEnumStateSerializer - Class in org.apache.flink.connector.kafka.source.enumerator
-
The
Serializer for the enumerator state of Kafka source.
- KafkaSourceEnumStateSerializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- KafkaSourceFetcherManager - Class in org.apache.flink.connector.kafka.source.reader.fetcher
-
The SplitFetcherManager for Kafka source.
- KafkaSourceFetcherManager(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, Supplier<SplitReader<ConsumerRecord<byte[], byte[]>, KafkaPartitionSplit>>, Consumer<Collection<String>>) - Constructor for class org.apache.flink.connector.kafka.source.reader.fetcher.KafkaSourceFetcherManager
-
Creates a new SplitFetcherManager with a single I/O thread.
- KafkaSourceOptions - Class in org.apache.flink.connector.kafka.source
-
Configurations for KafkaSource.
- KafkaSourceOptions() - Constructor for class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- KafkaSourceReader<T> - Class in org.apache.flink.connector.kafka.source.reader
-
The source reader for Kafka partitions.
- KafkaSourceReader(FutureCompletingBlockingQueue<RecordsWithSplitIds<ConsumerRecord<byte[], byte[]>>>, KafkaSourceFetcherManager, RecordEmitter<ConsumerRecord<byte[], byte[]>, T, KafkaPartitionSplitState>, Configuration, SourceReaderContext, KafkaSourceReaderMetrics) - Constructor for class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- KafkaSourceReaderMetrics - Class in org.apache.flink.connector.kafka.source.metrics
-
A collection class for handling metrics in
KafkaSourceReader.
- KafkaSourceReaderMetrics(SourceReaderMetricGroup) - Constructor for class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- KafkaSubscriber - Interface in org.apache.flink.connector.kafka.source.enumerator.subscriber
-
The Kafka consumer allows a few different ways to consume from topics, including subscribing to a fixed collection of topics.
- KafkaTopicPartition - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Flink's description of a partition in a Kafka topic.
- KafkaTopicPartition(String, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- KafkaTopicPartition.Comparator - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A
Comparator for KafkaTopicPartitions.
- KafkaTopicPartitionAssigner - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Utility for assigning Kafka partitions to consumer subtasks.
- KafkaTopicPartitionAssigner() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionAssigner
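The assignment performed by KafkaTopicPartitionAssigner can be sketched as a round-robin distribution that starts from an index derived from the topic name, so partitions of different topics do not all pile onto subtask 0. The start-index formula below is illustrative and may differ in detail from the actual implementation.

```java
// Sketch of partition-to-subtask assignment: partitions of one topic land on
// consecutive subtasks, offset by a per-topic start index.
class PartitionAssignerSketch {
    static int assign(String topic, int partition, int numParallelSubtasks) {
        // Derive a non-negative, topic-dependent start index (illustrative formula).
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
        // Round-robin from the start index over the available subtasks.
        return (startIndex + partition) % numParallelSubtasks;
    }
}
```

With 4 subtasks, partitions 0, 1, 2, 3 of the same topic map to 4 distinct subtasks, and partition 4 wraps around to the same subtask as partition 0.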
- KafkaTopicPartitionLeader - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Serializable Topic Partition info with leader Node information.
- KafkaTopicPartitionLeader(KafkaTopicPartition, Node) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- KafkaTopicPartitionState<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
The state that the Flink Kafka Consumer holds for each Kafka partition.
- KafkaTopicPartitionState(KafkaTopicPartition, KPH) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- KafkaTopicPartitionStateSentinel - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Magic values used to represent special offset states before partitions are actually read.
- KafkaTopicPartitionStateSentinel() - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
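The sentinel mechanism described above works because real Kafka offsets are never negative, so negative "magic" values can safely encode states such as "start from latest" before any record has been read. The concrete constants below are illustrative placeholders, not guaranteed to match the class's actual values.

```java
// Sketch of offset sentinels: negative magic values mark pre-read states.
class OffsetSentinelSketch {
    // Hypothetical placeholder constants for illustration only.
    static final long OFFSET_NOT_SET = -915623761776L;
    static final long LATEST_OFFSET = -1L;
    static final long EARLIEST_OFFSET = -2L;

    // Real Kafka offsets are always >= 0, so any negative value is a sentinel.
    static boolean isSentinel(long offset) {
        return offset < 0;
    }
}
```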
- KafkaTopicPartitionStateWithWatermarkGenerator<T,KPH> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A special version of the per-kafka-partition-state that additionally holds a
TimestampAssigner, WatermarkGenerator, an immediate WatermarkOutput, and a deferred WatermarkOutput for this partition.
- KafkaTopicPartitionStateWithWatermarkGenerator(KafkaTopicPartition, KPH, TimestampAssigner<T>, WatermarkGenerator<T>, WatermarkOutput, WatermarkOutput) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
- KafkaTopicsDescriptor - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A Kafka Topics Descriptor describes how the consumer subscribes to Kafka topics - either a fixed list of topics, or a topic pattern.
- KafkaTopicsDescriptor(List<String>, Pattern) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
- KafkaTransactionContext(Set<String>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionContext
-
Deprecated.
- KafkaTransactionState(String, long, short, FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- KafkaTransactionState(String, FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- KafkaTransactionState(FlinkKafkaInternalProducer<byte[], byte[]>) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Boolean configuration key to disable metrics tracking.
- KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.Configuration key for disabling the metrics reporting.
- KEY_DISABLE_METRICS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Configuration key for disabling the metrics reporting.
- KEY_FIELDS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- KEY_FIELDS_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- KEY_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- KEY_PARTITION_DISCOVERY_INTERVAL_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Configuration key to define the consumer's partition discovery interval, in milliseconds.
- KEY_POLL_TIMEOUT - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated.Configuration key to change the polling timeout.
- keyDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Optional format for decoding keys from Kafka.
- KeyedDeserializationSchema<T> - Interface in org.apache.flink.streaming.util.serialization
-
Deprecated.
- KeyedSerializationSchema<T> - Interface in org.apache.flink.streaming.util.serialization
-
Deprecated.
- KeyedSerializationSchemaWrapper<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A simple wrapper for using the SerializationSchema with the KeyedSerializationSchema interface.
- KeyedSerializationSchemaWrapper(SerializationSchema<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
- keyEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Optional format for encoding keys to Kafka.
- keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyPrefix - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Prefix that needs to be removed from fields when constructing the physical data type.
- keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the key fields and the source position in the consumed row.
- keyProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the key fields and the target position in the produced row.
L
- lastParallelism - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- latest() - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an
OffsetsInitializer which initializes the offsets to the latest offsets of each partition.
- LATEST - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from the latest offset.
- LATEST - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from the latest offset.
- LATEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- LATEST_OFFSET - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- LATEST_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
-
Deprecated.
- LATEST_OFFSET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines the partition should start from the latest offset.
- LEGACY_COMMITTED_OFFSETS_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- LEGACY_CURRENT_OFFSETS_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- listReadableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- listWritableMetadata() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
- LOG - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- logFailuresOnly - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Flag indicating whether to accept failures (and log them), or to fail on failures.
M
- markActive() - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
- markIdle() - Method in class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
- MAX_NUM_PENDING_CHECKPOINTS - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
The maximum number of pending non-committed checkpoints to track, to avoid memory leaks.
- maybeAddRecordsLagMetric(KafkaConsumer<?, ?>, TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Adds a partition's records-lag metric to the tracking list if this partition has not appeared before.
- metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Metadata that is appended at the end of a physical sink row.
- metadataKeys - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Metadata that is appended at the end of a physical source row.
- metrics() - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- MetricUtil - Class in org.apache.flink.connector.kafka
-
Collection of methods to interact with Kafka's client metric system.
- MetricUtil() - Constructor for class org.apache.flink.connector.kafka.MetricUtil
N
- nextFreeTransactionalId - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- NextTransactionalIdHint() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- NextTransactionalIdHint() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHint
- NextTransactionalIdHint(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- NextTransactionalIdHintSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- NextTransactionalIdHintSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHintSerializer
- NextTransactionalIdHintSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot
-
Deprecated.
- NextTransactionalIdHintSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.NextTransactionalIdHintSerializer.NextTransactionalIdHintSerializerSnapshot
- NO_STOPPING_OFFSET - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- NONE - org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated.Semantic.NONE means that nothing will be guaranteed.
- NoStoppingOffsetsInitializer - Class in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An implementation of
OffsetsInitializer which does not initialize anything.
- NoStoppingOffsetsInitializer() - Constructor for class org.apache.flink.connector.kafka.source.enumerator.initializer.NoStoppingOffsetsInitializer
- notifyCheckpointAborted(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- notifyCheckpointComplete(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- notifyCheckpointComplete(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- notifyCheckpointComplete(Map<TopicPartition, OffsetAndMetadata>, OffsetCommitCallback) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- numPendingRecords() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
O
- of(KafkaDeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a legacy
KafkaDeserializationSchema as the deserializer of the ConsumerRecords.
- OFFSET_NOT_SET - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateSentinel
-
Magic number that defines an unset offset.
- OffsetCommitMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
The offset commit mode represents the behaviour of how offsets are externally committed back to Kafka brokers / Zookeeper.
- OffsetCommitModes - Class in org.apache.flink.streaming.connectors.kafka.config
-
Utilities for
OffsetCommitMode.
- OffsetCommitModes() - Constructor for class org.apache.flink.streaming.connectors.kafka.config.OffsetCommitModes
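The behavior captured by the ON_CHECKPOINTS, KAFKA_PERIODIC, and DISABLED modes listed in this index can be sketched as a small decision function. This mirrors the documented semantics (commit on completed checkpoints when checkpointing is enabled, otherwise fall back to the Kafka client's periodic auto-commit); the exact method signature in OffsetCommitModes may differ.

```java
// Sketch of resolving the offset commit mode from the consumer configuration.
class CommitModeSketch {
    enum Mode { ON_CHECKPOINTS, KAFKA_PERIODIC, DISABLED }

    static Mode resolve(
            boolean autoCommitEnabled,
            boolean commitOnCheckpoints,
            boolean checkpointingEnabled) {
        if (checkpointingEnabled) {
            // With checkpointing, offsets are committed only on completed
            // checkpoints (if offset committing is enabled at all).
            return commitOnCheckpoints ? Mode.ON_CHECKPOINTS : Mode.DISABLED;
        }
        // Without checkpointing, rely on Kafka's periodic auto-commit, if enabled.
        return autoCommitEnabled ? Mode.KAFKA_PERIODIC : Mode.DISABLED;
    }
}
```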
- offsets(Map<TopicPartition, Long>) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an
OffsetsInitializer which initializes the offsets to the specified offsets.
- offsets(Map<TopicPartition, Long>, OffsetResetStrategy) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an
OffsetsInitializer which initializes the offsets to the specified offsets.
- OFFSETS_BY_PARTITION_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- OFFSETS_BY_TOPIC_METRICS_GROUP - Static variable in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaConsumerMetricConstants
- offsetsForTimes(Map<TopicPartition, Long>) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer.PartitionOffsetsRetriever
-
List offsets matching a timestamp for the specified partitions.
- offsetsForTimes(Map<TopicPartition, Long>) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- OffsetsInitializer - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An interface for users to specify the starting / stopping offset of a KafkaPartitionSplit.
- OffsetsInitializer.PartitionOffsetsRetriever - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
An interface that provides necessary information to the OffsetsInitializer to get the initial offsets of the Kafka partitions.
- OffsetsInitializerValidator - Interface in org.apache.flink.connector.kafka.source.enumerator.initializer
-
Interface for validating OffsetsInitializer with properties from KafkaSource.
- ON_CHECKPOINTS - org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Commit offsets back to Kafka only when checkpoints are completed.
- onEvent(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- onEvent(T, long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
- onException(Throwable) - Method in interface org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback
-
A callback method the user can implement to provide asynchronous handling of commit request failure.
- onPeriodicEmit() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- onPeriodicEmit() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
- onSplitFinished(Map<String, KafkaPartitionSplitState>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- onSuccess() - Method in interface org.apache.flink.streaming.connectors.kafka.internals.KafkaCommitCallback
-
A callback method the user can implement to provide asynchronous handling of commit request completion.
- open() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Opens the partition discoverer, initializing all required Kafka connections.
- open(int, int) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- open(int, int) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
-
Initializer for the partitioner.
- open(DeserializationSchema.InitializationContext) - Method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Initialization method for the schema.
- open(DeserializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaDeserializationSchemaWrapper
- open(DeserializationSchema.InitializationContext) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema
-
Initialization method for the schema.
- open(DeserializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.util.serialization.JSONKeyValueDeserializationSchema
- open(SerializationSchema.InitializationContext) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- open(SerializationSchema.InitializationContext) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema
-
Initialization method for the schema.
- open(SerializationSchema.InitializationContext, KafkaRecordSerializationSchema.KafkaSinkContext) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Initialization method for the schema.
- open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Initializes the connection to Kafka.
- open(Configuration) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Initializes the connection to Kafka.
- optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- optionalOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- org.apache.flink.connector.kafka - package org.apache.flink.connector.kafka
- org.apache.flink.connector.kafka.sink - package org.apache.flink.connector.kafka.sink
- org.apache.flink.connector.kafka.source - package org.apache.flink.connector.kafka.source
- org.apache.flink.connector.kafka.source.enumerator - package org.apache.flink.connector.kafka.source.enumerator
- org.apache.flink.connector.kafka.source.enumerator.initializer - package org.apache.flink.connector.kafka.source.enumerator.initializer
- org.apache.flink.connector.kafka.source.enumerator.subscriber - package org.apache.flink.connector.kafka.source.enumerator.subscriber
- org.apache.flink.connector.kafka.source.metrics - package org.apache.flink.connector.kafka.source.metrics
- org.apache.flink.connector.kafka.source.reader - package org.apache.flink.connector.kafka.source.reader
- org.apache.flink.connector.kafka.source.reader.deserializer - package org.apache.flink.connector.kafka.source.reader.deserializer
- org.apache.flink.connector.kafka.source.reader.fetcher - package org.apache.flink.connector.kafka.source.reader.fetcher
- org.apache.flink.connector.kafka.source.split - package org.apache.flink.connector.kafka.source.split
- org.apache.flink.connector.kafka.util - package org.apache.flink.connector.kafka.util
- org.apache.flink.streaming.connectors.kafka - package org.apache.flink.streaming.connectors.kafka
- org.apache.flink.streaming.connectors.kafka.config - package org.apache.flink.streaming.connectors.kafka.config
- org.apache.flink.streaming.connectors.kafka.internals - package org.apache.flink.streaming.connectors.kafka.internals
- org.apache.flink.streaming.connectors.kafka.internals.metrics - package org.apache.flink.streaming.connectors.kafka.internals.metrics
- org.apache.flink.streaming.connectors.kafka.partitioner - package org.apache.flink.streaming.connectors.kafka.partitioner
- org.apache.flink.streaming.connectors.kafka.shuffle - package org.apache.flink.streaming.connectors.kafka.shuffle
- org.apache.flink.streaming.connectors.kafka.table - package org.apache.flink.streaming.connectors.kafka.table
- org.apache.flink.streaming.util.serialization - package org.apache.flink.streaming.util.serialization
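The OffsetsInitializer entries above plug into the KafkaSource builder (whose setters are indexed under S). A rough sketch of that wiring — assuming the flink-connector-kafka dependency is on the classpath, with a hypothetical topic name and offset values:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.kafka.clients.consumer.OffsetResetStrategy;
import org.apache.kafka.common.TopicPartition;

class OffsetsInitializerSketch {
    static KafkaSource<String> build() {
        // Hypothetical topic and offsets, for illustration only.
        Map<TopicPartition, Long> specificOffsets = new HashMap<>();
        specificOffsets.put(new TopicPartition("my-topic", 0), 42L);
        specificOffsets.put(new TopicPartition("my-topic", 1), 7L);

        return KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("my-topic")
                // offsets(Map) uses the default reset strategy; the
                // two-argument variant makes the fallback for partitions
                // missing from the map explicit.
                .setStartingOffsets(
                        OffsetsInitializer.offsets(specificOffsets, OffsetResetStrategy.EARLIEST))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
    }
}
```

Without setBounded(OffsetsInitializer), the source built here runs as Boundedness.CONTINUOUS_UNBOUNDED, as the setUnbounded entry below notes.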
P
- parallelism - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Parallelism of the physical Kafka producer.
- partition(T, byte[], byte[], String, int[]) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
- partition(T, byte[], byte[], String, int[]) - Method in class org.apache.flink.streaming.connectors.kafka.partitioner.FlinkKafkaPartitioner
-
Determine the id of the partition that the record should be written to.
- PARTITION_DISCOVERY_DISABLED - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
The default interval to execute partition discovery, in milliseconds (Long.MIN_VALUE, i.e.
- PARTITION_DISCOVERY_INTERVAL_MS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- PARTITION_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
- partitionConsumerRecordsHandler(List<ConsumerRecord<byte[], byte[]>>, KafkaTopicPartitionState<T, TopicPartition>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaShuffleFetcher
- partitioner - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Partitioner to select Kafka partition for each item.
- PartitionOffsetsRetrieverImpl(AdminClient, String) - Constructor for class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator.PartitionOffsetsRetrieverImpl
- partitionsFor(String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- pauseOrResumeSplits(Collection<String>, Collection<String>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- pauseOrResumeSplits(Collection<KafkaPartitionSplit>, Collection<KafkaPartitionSplit>) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- peek() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the queue's next element without removing it, if the queue is non-empty.
- pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Number of unacknowledged records.
- pendingRecords - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Number of unacknowledged records.
- pendingRecordsLock - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Lock for accessing the pending records.
- persistentKeyBy(DataStream<T>, String, int, int, Properties, int...) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
-
Uses Kafka as a message bus to persist keyBy shuffle.
- persistentKeyBy(DataStream<T>, String, int, int, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
-
Uses Kafka as a message bus to persist keyBy shuffle.
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Data type to configure the formats.
- physicalDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type to configure the formats.
- poll() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns the queue's next element and removes it, if the queue is non-empty.
- pollBatch() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Returns all of the queue's current elements in a list, if the queue is non-empty.
- pollNext() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Polls the next element from the Handover, possibly blocking until the next element is available.
- pollTimeout - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated. From Kafka's Javadoc: The time, in milliseconds, spent waiting in poll if data is not available.
- preCommit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- produce(ConsumerRecords<byte[], byte[]>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Hands over an element from the producer.
- producedDataType - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Data type that describes the final output of the source.
- producer - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
KafkaProducer instance.
- producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. User defined properties for the Producer.
- producerConfig - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
User defined properties for the Producer.
- PRODUCERS_POOL_EMPTY - org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
-
Deprecated. User-supplied properties for Kafka.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Properties for the Kafka producer.
- properties - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Properties for the Kafka consumer.
- props - Variable in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
- PROPS_BOOTSTRAP_SERVERS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- PROPS_GROUP_ID - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
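The FlinkFixedPartitioner entries above (open(int, int) and partition(T, byte[], byte[], String, int[])) describe a partitioner that pins each parallel sink subtask to a single Kafka partition. A minimal self-contained sketch of that mapping — a re-implementation for illustration, not the Flink class itself:

```java
// Illustrative re-implementation of the fixed-partitioner mapping:
// subtask i always writes to partitions[i % partitions.length].
class FixedPartitionerSketch {

    // In Flink, parallelInstanceId and the available partitions arrive via
    // open(int, int) and partition(...); here they are plain arguments.
    static int choosePartition(int parallelInstanceId, int[] partitions) {
        if (partitions == null || partitions.length == 0) {
            throw new IllegalArgumentException("Partitions of the target topic are empty.");
        }
        return partitions[parallelInstanceId % partitions.length];
    }

    public static void main(String[] args) {
        int[] partitions = {0, 1, 2};
        // A sink with parallelism 5 writing to a 3-partition topic;
        // subtasks 3 and 4 wrap around to the first partitions.
        for (int subtask = 0; subtask < 5; subtask++) {
            System.out.println("subtask " + subtask + " -> partition "
                    + choosePartition(subtask, partitions));
        }
    }
}
```

Pinning each subtask to one partition keeps per-subtask ordering and avoids per-record partitioning work, at the cost of skew when the sink parallelism and the partition count differ.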
R
- readKeyBy(String, StreamExecutionEnvironment, TypeInformation<T>, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
- recordCommittedOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update the latest committed offset of the given TopicPartition.
- recordCurrentOffset(TopicPartition, long) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update the current consuming offset of the given TopicPartition.
- recordFailedCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Mark a failed commit.
- RECORDS_LAG - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- recordSucceededCommit() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Mark a successful commit.
- recoverAndAbort(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- recoverAndCommit(FlinkKafkaProducer.KafkaTransactionState) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- REGISTER_KAFKA_CONSUMER_METRICS - Static variable in class org.apache.flink.connector.kafka.source.KafkaSourceOptions
- registerKafkaConsumerMetrics(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register metrics of KafkaConsumer in Kafka metric group.
- registerNumBytesIn(KafkaConsumer<?, ?>) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register MetricNames.IO_NUM_BYTES_IN.
- registerTopicPartition(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Register metric groups for the given TopicPartition.
- removeRecordsLagMetric(TopicPartition) - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Remove a partition's records-lag metric from the tracking list.
- reportError(Throwable) - Method in class org.apache.flink.streaming.connectors.kafka.internals.ExceptionProxy
-
Sets the exception and interrupts the target thread, if no other exception has occurred so far.
- reportError(Throwable) - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Reports an exception.
- requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicTableFactory
- requiredOptions() - Method in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- restoreEnumerator(SplitEnumeratorContext<KafkaPartitionSplit>, KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.KafkaSource
- restoreWriter(Sink.InitContext, Collection<KafkaWriterState>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSink
- resumeTransaction(long, short) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
Instead of obtaining producerId and epoch from the transaction coordinator, re-use previously obtained ones, so that we can resume transaction after a restart.
- run() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
- run(SourceFunction.SourceContext<T>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- runFetchLoop() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
- runFetchLoop() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaFetcher
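The KafkaSink referenced by restoreWriter(Sink.InitContext, Collection<KafkaWriterState>) above is assembled through its builder. A rough sketch — assuming flink-connector-kafka on the classpath and a hypothetical topic name:

```java
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;

class KafkaSinkSketch {
    static KafkaSink<String> build() {
        return KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                // Serialize each String element to the value of a
                // ProducerRecord targeted at a fixed (hypothetical) topic.
                .setRecordSerializer(
                        KafkaRecordSerializationSchema.builder()
                                .setTopic("my-topic")
                                .setValueSerializationSchema(new SimpleStringSchema())
                                .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();
    }
}
```

For DeliveryGuarantee.EXACTLY_ONCE, a transactional-id prefix must also be set via setTransactionalIdPrefix(String), indexed below.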
S
- SAFE_SCALE_DOWN_FACTOR - Static variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. This coefficient determines the safe scale-down factor.
- SCAN_BOUNDED_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_BOUNDED_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_BOUNDED_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_SPECIFIC_OFFSETS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_STARTUP_TIMESTAMP_MILLIS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SCAN_TOPIC_PARTITION_DISCOVERY - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- schema - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
(Serializable) SerializationSchema for turning objects used with Flink into.
- semantic - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Semantic chosen for this instance.
- send(ProducerRecord<K, V>) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- send(ProducerRecord<K, V>, Callback) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, String) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- sendOffsetsToTransaction(Map<TopicPartition, OffsetAndMetadata>, ConsumerGroupMetadata) - Method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- serialize(KafkaSourceEnumState) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumStateSerializer
- serialize(KafkaPartitionSplit) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitSerializer
- serialize(FlinkKafkaProducer.KafkaTransactionContext, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- serialize(FlinkKafkaProducer.KafkaTransactionState, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- serialize(FlinkKafkaProducer.NextTransactionalIdHint, DataOutputView) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- serialize(T, Long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- serialize(T, Long) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema
-
Serializes the given element and returns it as a ProducerRecord.
- serialize(T, KafkaRecordSerializationSchema.KafkaSinkContext, Long) - Method in interface org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema
-
Serializes the given element and returns it as a ProducerRecord.
- serializeKey(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
- serializeKey(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
- serializeKey(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated. Serializes the key of the incoming element to a byte array. This method might return null if no key is available.
- serializeValue(Tuple2<K, V>) - Method in class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
- serializeValue(T) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KeyedSerializationSchemaWrapper
- serializeValue(T) - Method in interface org.apache.flink.streaming.util.serialization.KeyedSerializationSchema
-
Deprecated. Serializes the value of the incoming element to a byte array.
- setAndCheckDiscoveredPartition(KafkaTopicPartition) - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Sets a partition as discovered.
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the Kafka bootstrap servers.
- setBootstrapServers(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the bootstrap servers for the KafkaConsumer of the KafkaSource.
- setBounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
- setClientIdPrefix(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the client id prefix of this KafkaSource.
- setCommitOffsetsOnCheckpoints(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies whether or not the consumer should commit offsets back to Kafka on checkpoints.
- setCommittedOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- setCurrentOffset(long) - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
- setDeliverGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Deprecated. Will be removed in future versions. Use KafkaSinkBuilder.setDeliveryGuarantee(org.apache.flink.connector.base.DeliveryGuarantee) instead.
- setDeliveryGuarantee(DeliveryGuarantee) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the wanted DeliveryGuarantee.
- setDeserializer(KafkaRecordDeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for KafkaSource.
- setField(Object, String, Object) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
-
Sets the field fieldName on the given Object object to value using reflection.
- setFlushOnCheckpoint(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
If set to true, the Flink producer will wait for all outstanding messages in the Kafka buffers to be acknowledged by the Kafka producer on a checkpoint.
- setGroupId(String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the consumer group id of the KafkaSource.
- setKafkaKeySerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the key of the ProducerRecord.
- setKafkaKeySerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the key of the ProducerRecord.
- setKafkaMetric(Metric) - Method in class org.apache.flink.streaming.connectors.kafka.internals.metrics.KafkaMetricMutableWrapper
- setKafkaProducerConfig(Properties) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the configuration which is used to instantiate all used KafkaProducer instances.
- setKafkaSubscriber(KafkaSubscriber) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a custom Kafka subscriber to use to discover new splits.
- setKafkaValueSerializer(Class<? extends Serializer<? super T>>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets Kafka's Serializer to serialize incoming elements to the value of the ProducerRecord.
- setKafkaValueSerializer(Class<S>, Map<String, String>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a configurable Kafka Serializer and passes a configuration to serialize incoming elements to the value of the ProducerRecord.
- setKeySerializationSchema(SerializationSchema<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the key of the ProducerRecord.
- setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Defines whether the producer should fail on errors, or only log them.
- setLogFailuresOnly(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Defines whether the producer should fail on errors, or only log them.
- setNumParallelInstances(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- setNumParallelInstances(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Sets the parallelism with which the parallel task of the Kafka Producer runs.
- setOffset(long) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- setParallelInstanceId(int) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- setParallelInstanceId(int) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Sets the number of the parallel subtask that the Kafka Producer is running on.
- setPartitioner(FlinkKafkaPartitioner<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a custom partitioner determining the target partition of the target topic.
- setPartitions(int[]) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- setPartitions(int[]) - Method in interface org.apache.flink.streaming.connectors.kafka.KafkaContextAware
-
Sets the available partitions for the topic returned from KafkaContextAware.getTargetTopic(Object).
- setPartitions(Set<TopicPartition>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a set of partitions to consume from.
- setProperties(Properties) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set arbitrary properties for the KafkaSource and KafkaConsumer.
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
- setProperty(String, String) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set an arbitrary property for the KafkaSource and KafkaConsumer.
- setRecordSerializer(KafkaRecordSerializationSchema<IN>) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the KafkaRecordSerializationSchema that transforms incoming records to ProducerRecords.
- setStartFromEarliest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from the earliest offset for all partitions.
- setStartFromGroupOffsets() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from any committed group offsets found in Zookeeper / Kafka brokers.
- setStartFromLatest() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading from the latest offset for all partitions.
- setStartFromSpecificOffsets(Map<KafkaTopicPartition, Long>) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading partitions from specific offsets, set independently for each partition.
- setStartFromTimestamp(long) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
-
Specifies the consumer to start reading partitions from a specified timestamp.
- setStartingOffsets(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Specify from which offsets the KafkaSource should start consuming by providing an OffsetsInitializer.
- setTopic(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a fixed topic which is used as the destination for all records.
- setTopicPattern(Pattern) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a topic pattern to consume from, using the Java Pattern.
- setTopics(String...) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a list of topics the KafkaSource should consume from.
- setTopics(List<String>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Set a list of topics the KafkaSource should consume from.
- setTopicSelector(TopicSelector<? super T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a topic selector which computes the target topic for every incoming record.
- setTransactionalIdPrefix(String) - Method in class org.apache.flink.connector.kafka.sink.KafkaSinkBuilder
-
Sets the prefix for all created transactionalIds if DeliveryGuarantee.EXACTLY_ONCE is configured.
- setTransactionalIdPrefix(String) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Specifies the prefix of the transactional.id property to be used by the producers when communicating with Kafka.
- setUnbounded(OffsetsInitializer) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
By default the KafkaSource is set to run as Boundedness.CONTINUOUS_UNBOUNDED and thus never stops until the Flink job fails or is canceled.
- setValueOnlyDeserializer(DeserializationSchema<OUT>) - Method in class org.apache.flink.connector.kafka.source.KafkaSourceBuilder
-
Sets the deserializer of the ConsumerRecord for KafkaSource.
- setValueSerializationSchema(SerializationSchema<T>) - Method in class org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchemaBuilder
-
Sets a SerializationSchema which is used to serialize the incoming element to the value of the ProducerRecord.
- setWriteTimestamp(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper
- setWriteTimestampToKafka(boolean) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. If set to true, Flink will write the (event time) timestamp attached to each record into Kafka.
- shutdown() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaConsumerThread
-
Shuts this thread down, waking up the thread gracefully if blocked (without Thread.interrupt() calls).
- SINK_BUFFER_FLUSH_INTERVAL - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_BUFFER_FLUSH_MAX_ROWS - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_CHANGELOG_MODE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory.EncodingFormatWrapper
- SINK_PARALLELISM - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SINK_PARTITIONER - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- SinkBufferFlushMode - Class in org.apache.flink.streaming.connectors.kafka.table
-
Sink buffer flush configuration.
- SinkBufferFlushMode(int, long) - Constructor for class org.apache.flink.streaming.connectors.kafka.table.SinkBufferFlushMode
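The flush configuration above has two triggers: a maximum row count and a flush interval. A stdlib-only sketch of the row-count half, with RowBuffer as an illustrative stand-in (the real mode also flushes on a timer given by the interval argument, omitted here):

```java
// Illustrative row-count flush trigger: buffered rows are flushed once
// batchSize is reached. Not Flink's implementation, just the contract.
import java.util.ArrayList;
import java.util.List;

public class RowBuffer {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    private int flushes = 0;

    public RowBuffer(int batchSize) {
        this.batchSize = batchSize;
    }

    public void add(String row) {
        pending.add(row);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    public void flush() {
        // A real sink would emit the pending rows here before clearing.
        pending.clear();
        flushes++;
    }

    public int flushCount() { return flushes; }

    public int pendingCount() { return pending.size(); }
}
```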
- size() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
-
Gets the number of elements currently in the queue.
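The ClosableBlockingQueue contract indexed here (add() succeeds only while the queue is open; close() makes further adds fail) can be sketched with stdlib types. MiniClosableQueue is an illustrative stand-in, not the class from org.apache.flink.streaming.connectors.kafka.internals:

```java
// Minimal closable queue: add() throws once the queue has been closed.
import java.util.ArrayDeque;

public class MiniClosableQueue<E> {
    private final ArrayDeque<E> elements = new ArrayDeque<>();
    private boolean closed = false;

    public synchronized boolean add(E e) {
        if (closed) {
            // The real class fails with an exception when closed; we mirror that.
            throw new IllegalStateException("queue is closed");
        }
        elements.add(e);
        return true;
    }

    // Gets the number of elements currently in the queue.
    public synchronized int size() {
        return elements.size();
    }

    public synchronized void close() {
        closed = true;
    }
}
```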
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.ContextStateSerializer
-
Deprecated.
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHintSerializer
-
Deprecated.
- snapshotConfiguration() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- snapshotCurrentState() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Takes a snapshot of the partition offsets.
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
- snapshotState(long) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated.
- snapshotState(FunctionSnapshotContext) - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
- sourceContext - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
The source context to emit records and watermarks to.
- SourceContextWatermarkOutputAdapter<T> - Class in org.apache.flink.streaming.connectors.kafka.internals
-
A WatermarkOutput that forwards calls to a SourceFunction.SourceContext.
- SourceContextWatermarkOutputAdapter(SourceFunction.SourceContext<T>) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.SourceContextWatermarkOutputAdapter
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from user-supplied specific offsets for each partition.
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from user-supplied specific offsets for each partition.
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- SPECIFIC_OFFSETS - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- specificBoundedOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Specific end offsets; only relevant when bounded mode is BoundedMode.SPECIFIC_OFFSETS.
- specificStartupOffsets - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Specific startup offsets; only relevant when startup mode is StartupMode.SPECIFIC_OFFSETS.
- splitId() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- start() - Method in class org.apache.flink.connector.kafka.source.enumerator.KafkaSourceEnumerator
-
Start the enumerator.
- startupMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The startup mode for the contained consumer (default is StartupMode.GROUP_OFFSETS).
- StartupMode - Enum in org.apache.flink.streaming.connectors.kafka.config
-
Startup modes for the Kafka Consumer.
- startupTimestampMillis - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The start timestamp to locate partition offsets; only relevant when startup mode is StartupMode.TIMESTAMP.
- subscribedPartitionStates() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Gets all partitions (with partition state) that this fetcher is subscribed to.
- supportsMetadataProjection() - Method in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- sync(Metric, Counter) - Static method in class org.apache.flink.connector.kafka.MetricUtil
-
Ensures that the counter has the same value as the given Kafka metric.
T
- tableIdentifier - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
- TAG_REC_WITH_TIMESTAMP - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
- TAG_REC_WITHOUT_TIMESTAMP - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
- TAG_WATERMARK - Static variable in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffleProducer.KafkaSerializer
- timestamp(long) - Static method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
-
Get an OffsetsInitializer which initializes the offsets in each partition so that the initialized offset is the offset of the first record whose record timestamp is greater than or equal to the given timestamp (milliseconds).
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
End from user-supplied timestamp for each partition.
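The timestamp(long) initializer above picks, per partition, the offset of the first record whose timestamp is greater than or equal to the requested value. Its selection rule can be sketched with stdlib types; OffsetForTimestamp is an illustrative stand-in, not Flink code:

```java
// Illustrative offset lookup: given record timestamps indexed by offset,
// return the first offset whose timestamp >= target, or the end offset
// (timestamps.size()) when no such record exists.
import java.util.List;

public class OffsetForTimestamp {
    public static long find(List<Long> timestamps, long target) {
        for (int offset = 0; offset < timestamps.size(); offset++) {
            if (timestamps.get(offset) >= target) {
                return offset;
            }
        }
        return timestamps.size();
    }
}
```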
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Start from user-supplied timestamp for each partition.
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- TIMESTAMP - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- toKafkaPartitionSplit() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplitState
-
Use the current offset as the starting offset to create a new KafkaPartitionSplit.
- topic - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
The Kafka topic to write to.
- TOPIC - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- TOPIC_GROUP - Static variable in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
- TOPIC_PATTERN - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- topicPartitionsMap - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Partitions of each topic.
- topicPartitionsMap - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase
-
Partitions of each topic.
- topicPattern - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The Kafka topic pattern to consume.
- topics - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
The Kafka topics to consume.
- TopicSelector<IN> - Interface in org.apache.flink.connector.kafka.sink
-
Selects a topic for the incoming record.
- toSplitId(TopicPartition) - Static method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- toSplitType(String, KafkaPartitionSplitState) - Method in class org.apache.flink.connector.kafka.source.reader.KafkaSourceReader
- toString() - Method in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.KafkaTransactionState
-
Deprecated.
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.NextTransactionalIdHint
-
Deprecated.
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.ClosableBlockingQueue
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionLeader
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionState
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartitionStateWithWatermarkGenerator
- toString() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicsDescriptor
- toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- toString() - Method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
- toString(List<KafkaTopicPartition>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- toString(Map<KafkaTopicPartition, Long>) - Static method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition
- TRANSACTIONAL_ID_PREFIX - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- transactionalId - Variable in class org.apache.flink.streaming.connectors.kafka.internals.FlinkKafkaInternalProducer
- TransactionalIdsGenerator - Class in org.apache.flink.streaming.connectors.kafka.internals
-
Class responsible for generating transactional ids to use when communicating with Kafka.
- TransactionalIdsGenerator(String, int, int, int, int) - Constructor for class org.apache.flink.streaming.connectors.kafka.internals.TransactionalIdsGenerator
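The generator above derives transactional ids from a user prefix plus subtask information so that parallel producers never reuse each other's ids. The id layout below (prefix-subtask-sequence) is an assumption for illustration only; Flink's actual TransactionalIdsGenerator differs in detail:

```java
// Hypothetical id scheme: each subtask owns a disjoint block of ids so
// producer instances never clash. Not Flink's actual layout.
import java.util.ArrayList;
import java.util.List;

public class IdSketch {
    public static List<String> idsFor(String prefix, int subtask, int poolSize) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < poolSize; i++) {
            ids.add(prefix + "-" + subtask + "-" + i);
        }
        return ids;
    }
}
```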
- TransactionStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer
-
Deprecated.
- TransactionStateSerializer() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.TransactionStateSerializer
- TransactionStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.TransactionStateSerializer.TransactionStateSerializerSnapshot
-
Deprecated.
- TransactionStateSerializerSnapshot() - Constructor for class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011.TransactionStateSerializer.TransactionStateSerializerSnapshot
- TypeInformationKeyValueSerializationSchema<K,V> - Class in org.apache.flink.streaming.util.serialization
-
A serialization and deserialization schema for key-value pairs that uses Flink's serialization stack to transform typed records from and to byte arrays.
- TypeInformationKeyValueSerializationSchema(Class<K>, Class<V>, ExecutionConfig) - Constructor for class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
Creates a new de-/serialization schema for the given types.
- TypeInformationKeyValueSerializationSchema(TypeInformation<K>, TypeInformation<V>, ExecutionConfig) - Constructor for class org.apache.flink.streaming.util.serialization.TypeInformationKeyValueSerializationSchema
-
Creates a new de-/serialization schema for the given types.
U
- unassignedPartitionsQueue - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Queue of partitions that are not yet assigned to any Kafka clients for consuming.
- UNBOUNDED - org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Do not end consuming.
- UNBOUNDED - org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
- updateNumBytesInCounter() - Method in class org.apache.flink.connector.kafka.source.metrics.KafkaSourceReaderMetrics
-
Update MetricNames.IO_NUM_BYTES_IN.
- UpsertKafkaDynamicTableFactory - Class in org.apache.flink.streaming.connectors.kafka.table
-
Upsert-Kafka factory.
- UpsertKafkaDynamicTableFactory() - Constructor for class org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaDynamicTableFactory
- UpsertKafkaDynamicTableFactory.DecodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
-
It is used to wrap the decoding format and expose the desired changelog mode.
- UpsertKafkaDynamicTableFactory.EncodingFormatWrapper - Class in org.apache.flink.streaming.connectors.kafka.table
-
It is used to wrap the encoding format and expose the desired changelog mode.
- upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Flag to determine sink mode.
- upsertMode - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Flag to determine source mode.
V
- VALID_STARTING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- VALID_STOPPING_OFFSET_MARKERS - Static variable in class org.apache.flink.connector.kafka.source.split.KafkaPartitionSplit
- validate(Properties) - Method in interface org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializerValidator
-
Validate offsets initializer with properties of Kafka source.
- VALUE_FIELDS_INCLUDE - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- VALUE_FORMAT - Static variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions
- valueDecodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Format for decoding values from Kafka.
- valueEncodingFormat - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Format for encoding values to Kafka.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated. Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns the enum constant of this type with the specified name.
- valueOf(String) - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns the enum constant of this type with the specified name.
- valueOnly(Class<? extends Deserializer<V>>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
- valueOnly(Class<D>, Map<String, String>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a Kafka Deserializer into a KafkaRecordDeserializationSchema.
- valueOnly(DeserializationSchema<V>) - Static method in interface org.apache.flink.connector.kafka.source.reader.deserializer.KafkaRecordDeserializationSchema
-
Wraps a DeserializationSchema as the value deserialization schema of the ConsumerRecords.
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSink
-
Indices that determine the value fields and the source position in the consumed row.
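The valueOnly(...) factory methods above adapt a value-only deserializer so it can consume a whole key/value record by ignoring the key bytes. A stdlib-only sketch of that adaptation; KVRecord and the Function types are simplified stand-ins for Kafka's ConsumerRecord and Flink's DeserializationSchema:

```java
// Lift a value deserializer to a record deserializer that drops the key.
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

public class ValueOnlySketch {
    // Minimal stand-in for a consumed key/value record.
    public record KVRecord(byte[] key, byte[] value) {}

    public static <T> Function<KVRecord, T> valueOnly(Function<byte[], T> valueDeserializer) {
        return record -> valueDeserializer.apply(record.value());
    }

    // Example value deserializer: interpret the bytes as UTF-8 text.
    public static String utf8(byte[] bytes) {
        return new String(bytes, StandardCharsets.UTF_8);
    }
}
```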
- valueProjection - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Indices that determine the value fields and the target position in the produced row.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.BoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.OffsetCommitMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.config.StartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaErrorCode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.Semantic
-
Deprecated. Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanBoundedMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ScanStartupMode
-
Returns an array containing the constants of this enum type, in the order they are declared.
- values() - Static method in enum org.apache.flink.streaming.connectors.kafka.table.KafkaConnectorOptions.ValueFieldsStrategy
-
Returns an array containing the constants of this enum type, in the order they are declared.
W
- wakeup() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Interrupt an in-progress discovery attempt by throwing an AbstractPartitionDiscoverer.WakeupException.
- wakeUp() - Method in class org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader
- wakeupConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer
-
Attempt to eagerly wake up from blocking calls to Kafka in AbstractPartitionDiscoverer.getAllTopics() and AbstractPartitionDiscoverer.getAllPartitionsForTopics(List).
- wakeupConnections() - Method in class org.apache.flink.streaming.connectors.kafka.internals.KafkaPartitionDiscoverer
- WakeupException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.AbstractPartitionDiscoverer.WakeupException
- WakeupException() - Constructor for exception org.apache.flink.streaming.connectors.kafka.internals.Handover.WakeupException
- wakeupProducer() - Method in class org.apache.flink.streaming.connectors.kafka.internals.Handover
-
Wakes the producer thread up.
- watermarkOutput - Variable in class org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher
-
Wrapper around our SourceContext for allowing the WatermarkGenerator to emit watermarks and mark idleness.
- watermarkStrategy - Variable in class org.apache.flink.streaming.connectors.kafka.table.KafkaDynamicSource
-
Watermark strategy that is used to generate per-partition watermarks.
- writeKeyBy(DataStream<T>, String, Properties, int...) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
- writeKeyBy(DataStream<T>, String, Properties, KeySelector<T, K>) - Static method in class org.apache.flink.streaming.connectors.kafka.shuffle.FlinkKafkaShuffle
- writeTimestampToKafka - Variable in class org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer
-
Deprecated. Flag controlling whether we are writing the Flink record's timestamp into Kafka.