static <T> org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo>
StreamingSink.compactionWriter(org.apache.flink.table.connector.ProviderContext providerContext,
org.apache.flink.streaming.api.datastream.DataStream<T> inputStream,
long bucketCheckInterval,
org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.BucketsBuilder<T,String,? extends org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder,
FileSystemFactory fsFactory,
org.apache.flink.core.fs.Path path,
CompactReader.Factory<T> readFactory,
long targetFileSize,
int parallelism)
Creates a file writer with compaction operators from the input stream.
static org.apache.flink.streaming.api.datastream.DataStreamSink<?>
StreamingSink.sink(org.apache.flink.table.connector.ProviderContext providerContext,
org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo> writer,
org.apache.flink.core.fs.Path locationPath,
org.apache.flink.table.catalog.ObjectIdentifier identifier,
List<String> partitionKeys,
TableMetaStoreFactory msFactory,
FileSystemFactory fsFactory,
org.apache.flink.configuration.Configuration options)
Creates a sink from the file writer stream.
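The two methods above are designed to compose: `compactionWriter` produces a `DataStream<PartitionCommitInfo>` that `sink` consumes. The following is a minimal sketch of that wiring, assuming the surrounding objects (`env`, `providerContext`, `bucketsBuilder`, the factories, `path`, `identifier`, `partitionKeys`, and `options`) have been constructed elsewhere; all variable names and the sample parameter values are illustrative, not prescribed by the API.

```java
// Hypothetical wiring of StreamingSink.compactionWriter and StreamingSink.sink.
// All inputs below (input, providerContext, bucketsBuilder, fsFactory, path,
// readFactory, identifier, partitionKeys, msFactory, options) are assumed to
// exist already; the numeric values are placeholders.
DataStream<PartitionCommitInfo> writer =
        StreamingSink.compactionWriter(
                providerContext,
                input,                // DataStream<RowData> from upstream operators
                60_000L,              // bucketCheckInterval in ms (illustrative)
                bucketsBuilder,       // StreamingFileSink.BucketsBuilder
                fsFactory,            // FileSystemFactory
                path,                 // table location on the file system
                readFactory,          // CompactReader.Factory for re-reading files
                128L << 20,           // targetFileSize: 128 MB (illustrative)
                4);                   // writer parallelism (illustrative)

DataStreamSink<?> sink =
        StreamingSink.sink(
                providerContext,
                writer,               // the PartitionCommitInfo stream from above
                path,
                identifier,           // ObjectIdentifier of the target table
                partitionKeys,        // e.g. List.of("dt", "hour")
                msFactory,            // TableMetaStoreFactory for partition commits
                fsFactory,
                options);             // Configuration with sink options
```

Note that `sink` takes the same `path` and `fsFactory` as `compactionWriter`, since the committer must see the files the writer produced.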