| Package | Description |
|---|---|
| org.apache.flink.orc | |
| org.apache.flink.orc.shim | |
| Modifier and Type | Field and Description |
|---|---|
| protected OrcShim<BatchT> | AbstractOrcFileInputFormat.shim |
| Modifier and Type | Method and Description |
|---|---|
static <SplitT extends org.apache.flink.connector.file.src.FileSourceSplit> |
OrcColumnarRowInputFormat.createPartitionedFormat(OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> shim,
org.apache.hadoop.conf.Configuration hadoopConfig,
org.apache.flink.table.types.logical.RowType tableType,
List<String> partitionKeys,
org.apache.flink.connector.file.table.PartitionFieldExtractor<SplitT> extractor,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
java.util.function.Function<org.apache.flink.table.types.logical.RowType,org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData>> rowTypeInfoFactory)
Creates a partitioned
OrcColumnarRowInputFormat; the partition columns can be
generated from the split. |
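The PartitionFieldExtractor passed to createPartitionedFormat supplies the partition column values for each split. A common convention (illustrative here, not necessarily Flink's exact logic) is to parse Hive-style key=value directory segments out of the split's file path; a minimal self-contained sketch:

```java
// Illustrative sketch: deriving partition column values from a split's file
// path using Hive-style key=value directory names. The class and method
// names are hypothetical, not part of the Flink API.
import java.util.LinkedHashMap;
import java.util.Map;

public class PartitionPathSketch {

    /** Parse "dt=2023-01-01/region=eu/part-0.orc" into {dt=2023-01-01, region=eu}. */
    static Map<String, String> extractPartitionValues(String splitPath) {
        Map<String, String> values = new LinkedHashMap<>();
        for (String segment : splitPath.split("/")) {
            int eq = segment.indexOf('=');
            if (eq > 0) {
                values.put(segment.substring(0, eq), segment.substring(eq + 1));
            }
        }
        return values;
    }

    public static void main(String[] args) {
        System.out.println(
                extractPartitionValues("warehouse/dt=2023-01-01/region=eu/part-0.orc"));
    }
}
```

With values recovered from the path like this, the format does not need to read the partition columns from the ORC file itself.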
| Constructor and Description |
|---|
AbstractOrcFileInputFormat(OrcShim<BatchT> shim,
org.apache.hadoop.conf.Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize) |
OrcColumnarRowFileInputFormat(OrcShim<BatchT> shim,
org.apache.hadoop.conf.Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
ColumnBatchFactory<BatchT,SplitT> batchFactory,
org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo)
Deprecated.
|
OrcColumnarRowInputFormat(OrcShim<BatchT> shim,
org.apache.hadoop.conf.Configuration hadoopConfig,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
ColumnBatchFactory<BatchT,SplitT> batchFactory,
org.apache.flink.api.common.typeinfo.TypeInformation<org.apache.flink.table.data.RowData> producedTypeInfo) |
OrcColumnarRowSplitReader(OrcShim<BATCH> shim,
org.apache.hadoop.conf.Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
OrcColumnarRowSplitReader.ColumnBatchGenerator<BATCH> batchGenerator,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
org.apache.flink.core.fs.Path path,
long splitStart,
long splitLength) |
OrcSplitReader(OrcShim<BATCH> shim,
org.apache.hadoop.conf.Configuration conf,
org.apache.orc.TypeDescription schema,
int[] selectedFields,
List<OrcFilters.Predicate> conjunctPredicates,
int batchSize,
org.apache.flink.core.fs.Path path,
long splitStart,
long splitLength) |
OrcVectorizedReader(OrcShim<BatchT> shim,
org.apache.orc.RecordReader orcReader,
org.apache.flink.connector.file.src.util.Pool<AbstractOrcFileInputFormat.OrcReaderBatch<T,BatchT>> pool) |
| Modifier and Type | Class and Description |
|---|---|
| class | OrcShimV200: Shim for ORC with Hive version 2.0.0 and later. |
| class | OrcShimV210: Shim for ORC with Hive version 2.1.0 and later. |
| class | OrcShimV230: Shim for ORC with Hive version 2.3.0 and later. |
| Modifier and Type | Method and Description |
|---|---|
| static OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> | OrcShim.createShim(String hiveVersion): Creates a shim for the given Hive version. |
| static OrcShim<org.apache.hadoop.hive.ql.exec.vector.VectorizedRowBatch> | OrcShim.defaultShim(): Returns the default shim; with the bundled ORC dependency, the v2.3.0 shim is used. |
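OrcShim.createShim dispatches on the Hive version string to select a matching shim class (OrcShimV200, OrcShimV210, OrcShimV230). A minimal self-contained sketch of such a version-based factory, assuming a "newest shim whose minimum version is not above the requested version" rule; the class and map below are illustrative, not Flink's actual implementation:

```java
// Sketch of a version-string shim factory in the spirit of
// OrcShim.createShim(String hiveVersion). All names here are hypothetical.
import java.util.Map;
import java.util.TreeMap;

public class ShimFactorySketch {

    // Minimum Hive version -> shim class name, ordered by version string.
    static final TreeMap<String, String> SHIMS = new TreeMap<>(Map.of(
            "2.0.0", "OrcShimV200",
            "2.1.0", "OrcShimV210",
            "2.3.0", "OrcShimV230"));

    /** Pick the newest shim whose minimum version is <= the given Hive version. */
    static String createShim(String hiveVersion) {
        Map.Entry<String, String> entry = SHIMS.floorEntry(hiveVersion);
        if (entry == null) {
            throw new UnsupportedOperationException(
                    "Unsupported Hive version: " + hiveVersion);
        }
        return entry.getValue();
    }

    public static void main(String[] args) {
        System.out.println(createShim("2.2.0")); // falls back to OrcShimV210
        System.out.println(createShim("3.1.2")); // newest shim, OrcShimV230
    }
}
```

Versions older than 2.0.0 have no applicable shim and are rejected, which matches the table above listing 2.0.0 as the oldest supported Hive line.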
Copyright © 2014–2023 The Apache Software Foundation. All rights reserved.