Class DynamoDbSink<InputT>
- java.lang.Object
-
- org.apache.flink.connector.base.sink.AsyncSinkBase<InputT,DynamoDbWriteRequest>
-
- org.apache.flink.connector.dynamodb.sink.DynamoDbSink<InputT>
-
- Type Parameters:
InputT - Type of the elements handled by this sink
- All Implemented Interfaces:
Serializable,org.apache.flink.api.connector.sink2.Sink<InputT>,org.apache.flink.api.connector.sink2.StatefulSink<InputT,org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>>
@PublicEvolving public class DynamoDbSink<InputT> extends org.apache.flink.connector.base.sink.AsyncSinkBase<InputT,DynamoDbWriteRequest>
A DynamoDB sink that performs async requests against a destination table using the buffering protocol specified in AsyncSinkBase. The sink internally uses a DynamoDbAsyncClient to communicate with the AWS endpoint. The buffering behaviour may be configured when the sink is built:
- maxBatchSize: the maximum size of a batch of entries that may be written to DynamoDB; the DynamoDB client supports at most 25 elements per batch
- maxInFlightRequests: the maximum number of in-flight requests that may exist; once the maximum has been reached, further requests are blocked until some have completed
- maxBufferedRequests: the maximum number of elements held in the buffer; requests to the sink will backpressure while the number of elements in the buffer is at the maximum
- maxBatchSizeInBytes: this setting has no effect on the DynamoDbSink batch implementation
- maxTimeInBufferMS: the maximum amount of time an entry is allowed to live in the buffer; if any element reaches this age, the entire buffer is flushed immediately
- maxRecordSizeInBytes: this setting has no effect on the DynamoDbSink batch implementation
- failOnError: if set, the job fails immediately when an exception is encountered while persisting to DynamoDB
- overwriteByPartitionKeys: list of attribute key names on which the sink deduplicates, to work around the restriction that a single batch write request must not contain duplicate keys. The sink drops request items in the buffer whose (composite) primary key values are the same as a newly added item's; within a single batch, the newer request item takes precedence.
Please see the writer implementation in DynamoDbSinkWriter.
- See Also:
Serialized Form
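The configuration options above are supplied through the builder. The sketch below shows one plausible wiring; the setter names (setTableName, setOverwriteByPartitionKeys, setDynamoDbProperties, etc.) follow DynamoDbSinkBuilder as published, but the Order type, OrderConverter, and orderStream are assumptions for illustration — verify the exact builder methods against your connector version.

```java
// Sketch: constructing a DynamoDbSink via its builder (names of the
// user-side types Order / OrderConverter / orderStream are hypothetical).
Properties clientProps = new Properties();
clientProps.setProperty(AWSConfigConstants.AWS_REGION, "eu-west-1");

DynamoDbSink<Order> sink =
        DynamoDbSink.<Order>builder()
                .setTableName("orders")                        // destination table
                .setElementConverter(new OrderConverter())     // maps Order -> DynamoDbWriteRequest
                .setMaxBatchSize(25)                           // DynamoDB batch-write limit
                .setFailOnError(false)                         // retry instead of failing the job
                .setOverwriteByPartitionKeys(List.of("orderId")) // deduplicate within a batch
                .setDynamoDbProperties(clientProps)
                .build();

orderStream.sinkTo(sink);
```

Note that maxBatchSizeInBytes and maxRecordSizeInBytes need not be set here, since (per the list above) they have no effect on this sink's batch implementation.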
-
-
Constructor Summary
Constructors
Modifier: protected
Constructor: DynamoDbSink(org.apache.flink.connector.base.sink.writer.ElementConverter<InputT,DynamoDbWriteRequest> elementConverter, int maxBatchSize, int maxInFlightRequests, int maxBufferedRequests, long maxBatchSizeInBytes, long maxTimeInBufferMS, long maxRecordSizeInBytes, boolean failOnError, String tableName, List<String> overwriteByPartitionKeys, Properties dynamoDbClientProperties)
-
Method Summary
- static <InputT> DynamoDbSinkBuilder<InputT> builder()
  Create a DynamoDbSinkBuilder to construct a new DynamoDbSink.
- org.apache.flink.api.connector.sink2.StatefulSink.StatefulSinkWriter<InputT,org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> createWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context)
- org.apache.flink.core.io.SimpleVersionedSerializer<org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> getWriterStateSerializer()
- org.apache.flink.api.connector.sink2.StatefulSink.StatefulSinkWriter<InputT,org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> restoreWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context, Collection<org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> recoveredState)
-
-
-
Constructor Detail
-
DynamoDbSink
protected DynamoDbSink(org.apache.flink.connector.base.sink.writer.ElementConverter<InputT,DynamoDbWriteRequest> elementConverter, int maxBatchSize, int maxInFlightRequests, int maxBufferedRequests, long maxBatchSizeInBytes, long maxTimeInBufferMS, long maxRecordSizeInBytes, boolean failOnError, String tableName, List<String> overwriteByPartitionKeys, Properties dynamoDbClientProperties)
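The elementConverter parameter maps each incoming record to a DynamoDbWriteRequest. A minimal sketch of such a converter follows; the ElementConverter interface and the DynamoDbWriteRequest builder come from the connector and AWS SDK v2, while the Order POJO and its getters are hypothetical, introduced only for illustration.

```java
// Hypothetical converter: maps an Order POJO to a PUT write request.
// ElementConverter.apply(element, context) is the interface method from
// org.apache.flink.connector.base.sink.writer.ElementConverter.
public class OrderConverter implements ElementConverter<Order, DynamoDbWriteRequest> {
    @Override
    public DynamoDbWriteRequest apply(Order order, SinkWriter.Context context) {
        Map<String, AttributeValue> item = new HashMap<>();
        // AttributeValue is the AWS SDK v2 DynamoDB item attribute type.
        item.put("orderId", AttributeValue.builder().s(order.getId()).build());
        item.put("amount",
                AttributeValue.builder().n(String.valueOf(order.getAmount())).build());
        return DynamoDbWriteRequest.builder()
                .setType(DynamoDbWriteRequestType.PUT) // PUT vs DELETE request
                .setItem(item)
                .build();
    }
}
```

Since the converter is invoked per record on the hot path, it should be stateless and serializable, as it is shipped to the task managers with the sink.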
-
-
Method Detail
-
builder
public static <InputT> DynamoDbSinkBuilder<InputT> builder()
Create a DynamoDbSinkBuilder to construct a new DynamoDbSink.
- Type Parameters:
InputT - type of incoming records
- Returns:
DynamoDbSinkBuilder
-
createWriter
@Internal public org.apache.flink.api.connector.sink2.StatefulSink.StatefulSinkWriter<InputT,org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> createWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context) throws IOException
- Throws:
IOException
-
restoreWriter
@Internal public org.apache.flink.api.connector.sink2.StatefulSink.StatefulSinkWriter<InputT,org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> restoreWriter(org.apache.flink.api.connector.sink2.Sink.InitContext context, Collection<org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> recoveredState) throws IOException
- Throws:
IOException
-
getWriterStateSerializer
@Internal public org.apache.flink.core.io.SimpleVersionedSerializer<org.apache.flink.connector.base.sink.writer.BufferedRequestState<DynamoDbWriteRequest>> getWriterStateSerializer()