| Interface and Description |
|---|
| org.apache.flink.table.sources.DefinedFieldMapping: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sources.DefinedProctimeAttribute: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use the concept of computed columns instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.DefinedRowtimeAttributes: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use the concept of computed columns instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.FieldComputer: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use the concept of computed columns instead. See FLIP-95 for more information. |
| org.apache.flink.table.factories.FileSystemFormatFactory: This interface has been replaced by BulkReaderFormatFactory and BulkWriterFormatFactory. |
| org.apache.flink.table.sources.FilterableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use SupportsFilterPushDown instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.LimitableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use SupportsLimitPushDown instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.LookupableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use LookupTableSource instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.NestedFieldsProjectableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use SupportsProjectionPushDown instead. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.OverwritableTableSink: This interface will not be supported in the new sink design around DynamicTableSink, which only works with the Blink planner. Use SupportsOverwrite instead. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.PartitionableTableSink: This interface will not be supported in the new sink design around DynamicTableSink, which only works with the Blink planner. Use SupportsPartitioning instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.PartitionableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use SupportsPartitionPushDown instead. See FLIP-95 for more information. |
| org.apache.flink.table.sources.ProjectableTableSource: This interface will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use SupportsProjectionPushDown instead. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.TableSink: This interface has been replaced by DynamicTableSink. The new interface consumes internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
| org.apache.flink.table.sources.TableSource: This interface has been replaced by DynamicTableSource. The new interface produces internal data structures and only works with the Blink planner. See FLIP-95 for more information. |
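
Many of the source interfaces above are replaced by the concept of computed columns declared at table-registration time. As a minimal sketch (the `orders` table, its columns, and the `datagen` connector choice are illustrative, not part of this list), the same attributes can be expressed directly in DDL:

```sql
-- Illustrative table: a computed column replaces DefinedProctimeAttribute,
-- and a WATERMARK declaration replaces DefinedRowtimeAttributes /
-- RowtimeAttributeDescriptor / TimestampExtractor.
CREATE TABLE orders (
  order_id BIGINT,
  order_time TIMESTAMP(3),
  proc_time AS PROCTIME(),  -- processing-time attribute as a computed column
  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND  -- event-time attribute
) WITH (
  'connector' = 'datagen'
);
```

Because the attributes live in the DDL, the underlying DynamicTableSource no longer needs to implement any time-attribute interfaces.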
| Class and Description |
|---|
| org.apache.flink.table.dataview.ListViewSerializer |
| org.apache.flink.table.dataview.ListViewSerializerSnapshot |
| org.apache.flink.table.dataview.ListViewTypeInfo |
| org.apache.flink.table.dataview.ListViewTypeInfoFactory |
| org.apache.flink.table.dataview.MapViewSerializer |
| org.apache.flink.table.dataview.MapViewSerializerSnapshot |
| org.apache.flink.table.dataview.MapViewTypeInfo |
| org.apache.flink.table.dataview.MapViewTypeInfoFactory |
| org.apache.flink.table.dataview.NullAwareMapSerializer |
| org.apache.flink.table.dataview.NullAwareMapSerializerSnapshot |
| org.apache.flink.table.sources.RowtimeAttributeDescriptor: This class will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use the concept of computed columns instead. See FLIP-95 for more information. |
| org.apache.flink.table.util.TableConnectorUtil: Use TableConnectorUtils instead. |
| org.apache.flink.table.typeutils.TimeIndicatorTypeInfo: This class will be removed in future versions as it is used for the old type system. It is recommended to use DataTypes instead. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.typeutils.TimeIntervalTypeInfo: This class will be removed in future versions as it is used for the old type system. It is recommended to use DataTypes instead. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.sources.tsextractors.TimestampExtractor: This class will not be supported in the new source design around DynamicTableSource, which only works with the Blink planner. Use the concept of computed columns instead. See FLIP-95 for more information. |
| org.apache.flink.table.types.logical.TypeInformationRawType: Use RawType instead. |
| org.apache.flink.table.api.Types: This class will be removed in future versions as it uses the old type system. It is recommended to use DataTypes instead, which uses the new type system based on instances of DataType. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.utils.TypeStringUtils: This utility is based on TypeInformation. However, the Table & SQL API is being migrated to DataTypes based on LogicalTypes. Use LogicalTypeParser instead. |
| Method and Description |
|---|
| org.apache.flink.table.sinks.TableSink.configure(String[], TypeInformation<?>[]): This method will be dropped in future versions. It is recommended to pass a static schema when instantiating the sink instead. |
| org.apache.flink.table.factories.TableSinkFactory.createTableSink(Map<String, String>): TableSinkFactory.Context contains more information and already includes the table schema. Please use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.TableSinkFactory.createTableSink(ObjectPath, CatalogTable): TableSinkFactory.Context contains more information and already includes the table schema. Please use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.TableSourceFactory.createTableSource(Map<String, String>): TableSourceFactory.Context contains more information and already includes the table schema. Please use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.factories.TableSourceFactory.createTableSource(ObjectPath, CatalogTable): TableSourceFactory.Context contains more information and already includes the table schema. Please use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.descriptors.Schema.field(String, TypeInformation<?>): This method will be removed in future versions as it uses the old type system. Please use Schema.field(String, DataType) instead. |
| org.apache.flink.table.api.TableSchema.Builder.field(String, TypeInformation<?>): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSchema.Builder.field(String, DataType) instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.api.TableSchema.fromTypeInfo(TypeInformation<?>): This method will be removed soon. Use DataTypes to declare types. |
| org.apache.flink.table.functions.ImperativeAggregateFunction.getAccumulatorType(): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.sinks.TableSink.getFieldNames(): Use the field names of TableSink.getTableSchema() instead. |
| org.apache.flink.table.api.TableSchema.getFieldType(int): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSchema.getFieldDataType(int) instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.api.TableSchema.getFieldType(String): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSchema.getFieldDataType(String) instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.sinks.TableSink.getFieldTypes(): Use the field types of TableSink.getTableSchema() instead. |
| org.apache.flink.table.api.TableSchema.getFieldTypes(): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSchema.getFieldDataTypes() instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.plan.stats.ColumnStats.getMaxValue() |
| org.apache.flink.table.plan.stats.ColumnStats.getMinValue() |
| org.apache.flink.table.sinks.TableSink.getOutputType(): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSink.getConsumedDataType() instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.functions.ScalarFunction.getParameterTypes(Class<?>[]): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.TableFunction.getParameterTypes(Class<?>[]): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.catalog.CatalogBaseTable.getProperties() |
| org.apache.flink.table.functions.TableFunction.getResultType(): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.ImperativeAggregateFunction.getResultType(): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.ScalarFunction.getResultType(Class<?>[]): This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated TableEnvironment.registerFunction(...) method. The new reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.sources.TableSource.getReturnType(): This method will be removed in future versions as it uses the old type system. It is recommended to use TableSource.getProducedDataType() instead, which uses the new type system based on DataTypes. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| org.apache.flink.table.catalog.Catalog.getTableFactory(): Use Catalog.getFactory() for the new factory stack. The new factory stack uses the new table sources and sinks defined in FLIP-95 and a slightly different discovery mechanism. |
| org.apache.flink.table.sources.TableSource.getTableSchema(): The table schema is a logical description of a table and should not be part of the physical TableSource. Define the schema when registering a Table, either in DDL or in TableEnvironment#connect(...). |
| org.apache.flink.table.api.TableColumn.of(String, DataType): Use TableColumn.physical(String, DataType) instead. |
| org.apache.flink.table.api.TableColumn.of(String, DataType, String): Use TableColumn.computed(String, DataType, String) instead. |
| org.apache.flink.table.api.TableSchema.toRowType(): Use TableSchema.toRowDataType() instead. |
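
Several of the method entries above (TableSink.configure, TableSource.getTableSchema, the createTableSource/createTableSink variants) share one theme: the schema moves out of the connector classes and into the table registration. A hedged sketch in DDL (table names, columns, and the `datagen`/`blackhole` connector choices are illustrative):

```sql
-- Illustrative: the schema is declared once at registration time,
-- instead of being reported by TableSource.getTableSchema() or
-- pushed into the sink via TableSink.configure(...).
CREATE TABLE sensor_readings (
  sensor_id STRING,
  reading   DOUBLE,
  ts        TIMESTAMP(3)
) WITH (
  'connector' = 'datagen'
);

CREATE TABLE readings_sink (
  sensor_id STRING,
  reading   DOUBLE
) WITH (
  'connector' = 'blackhole'
);

INSERT INTO readings_sink SELECT sensor_id, reading FROM sensor_readings;
```

With this design, the factory receives the declared schema through its Context and the planner, not the connector, owns the schema.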
| Constructor and Description |
|---|
| org.apache.flink.table.plan.stats.ColumnStats(Long, Long, Double, Integer, Number, Number) |
| org.apache.flink.table.api.dataview.ListView(TypeInformation<?>): This constructor uses the old type system. Please use a DataTypeHint instead if the reflective type extraction is not successful. |
| org.apache.flink.table.api.dataview.MapView(TypeInformation<?>, TypeInformation<?>): This constructor uses the old type system. Please use a DataTypeHint instead if the reflective type extraction is not successful. |
| org.apache.flink.table.api.TableSchema(String[], TypeInformation<?>[]): Use the TableSchema.Builder instead. |
Copyright © 2014–2021 The Apache Software Foundation. All rights reserved.