| Interface | Description |
|---|---|
| `org.apache.flink.table.api.constraints.Constraint` | See `ResolvedSchema` and `Constraint`. |
| `org.apache.flink.table.sources.DefinedFieldMapping` | This interface will not be supported in the new source design around `DynamicTableSource`. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.DefinedProctimeAttribute` | This interface will not be supported in the new source design around `DynamicTableSource`. Use the concept of computed columns instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.DefinedRowtimeAttributes` | This interface will not be supported in the new source design around `DynamicTableSource`. Use the concept of computed columns instead. See FLIP-95 for more information. |
| `org.apache.flink.table.descriptors.Descriptor` | `Descriptor` was primarily used for the legacy connector stack and has been deprecated. Use `TableDescriptor` to create sources and sinks from the Table API. |
| `org.apache.flink.table.descriptors.DescriptorValidator` | See `Descriptor` for details. |
| `org.apache.flink.table.sources.FieldComputer` | This interface will not be supported in the new source design around `DynamicTableSource`. Use the concept of computed columns instead. See FLIP-95 for more information. |
| `org.apache.flink.table.factories.FileSystemFormatFactory` | This interface has been replaced by `BulkReaderFormatFactory` and `BulkWriterFormatFactory`. |
| `org.apache.flink.table.sources.FilterableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `SupportsFilterPushDown` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.LimitableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `SupportsLimitPushDown` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.LookupableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `LookupTableSource` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.NestedFieldsProjectableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `SupportsProjectionPushDown` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sinks.OverwritableTableSink` | This interface will not be supported in the new sink design around `DynamicTableSink`. Use `SupportsOverwrite` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sinks.PartitionableTableSink` | This interface will not be supported in the new sink design around `DynamicTableSink`. Use `SupportsPartitioning` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.PartitionableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `SupportsPartitionPushDown` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.ProjectableTableSource` | This interface will not be supported in the new source design around `DynamicTableSource`. Use `SupportsProjectionPushDown` instead. See FLIP-95 for more information. |
| `org.apache.flink.table.sinks.TableSink` | This interface has been replaced by `DynamicTableSink`. The new interface consumes internal data structures. See FLIP-95 for more information. |
| `org.apache.flink.table.sources.TableSource` | This interface has been replaced by `DynamicTableSource`. The new interface produces internal data structures. See FLIP-95 for more information. |
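As a rough illustration of the migrations above, a minimal `ScanTableSource` that replaces the legacy `LimitableTableSource` with the `SupportsLimitPushDown` ability might look like the following sketch. The class name is invented, and the runtime provider is deliberately omitted; this is not a complete connector.

```java
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;
import org.apache.flink.table.connector.source.abilities.SupportsLimitPushDown;

/** Hypothetical source sketch; only the ability wiring is shown. */
public class MyLimitableSource implements ScanTableSource, SupportsLimitPushDown {

    private long limit = -1;

    @Override
    public void applyLimit(long limit) {
        // The planner pushes the LIMIT here, replacing the legacy
        // LimitableTableSource.applyLimit(...) contract.
        this.limit = limit;
    }

    @Override
    public ChangelogMode getChangelogMode() {
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext runtimeProviderContext) {
        // A real connector returns e.g. a SourceProvider here; omitted in this sketch.
        throw new UnsupportedOperationException("runtime provider omitted in sketch");
    }

    @Override
    public DynamicTableSource copy() {
        MyLimitableSource copy = new MyLimitableSource();
        copy.limit = this.limit;
        return copy;
    }

    @Override
    public String asSummaryString() {
        return "MyLimitableSource";
    }
}
```

The other pushdown abilities (`SupportsFilterPushDown`, `SupportsProjectionPushDown`, `SupportsPartitionPushDown`) are mixed in the same way: the source implements the ability interface and the planner calls it during optimization.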
| Class | Description |
|---|---|
| `org.apache.flink.table.descriptors.ConnectorDescriptorValidator` | |
| `org.apache.flink.table.descriptors.DescriptorProperties` | This utility will be dropped soon. `DynamicTableFactory` is based on `ConfigOption`, and catalogs use `CatalogPropertiesUtil`. |
| `org.apache.flink.table.descriptors.FileSystemValidator` | The legacy CSV connector has been replaced by `FileSource` / `FileSink`. It is kept only to support tests for the legacy connector stack. |
| `org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter` | Use `DataTypeFactory.createDataType(TypeInformation)` instead. Note that this method will no longer create legacy types; it fully uses the new type system, which is available only in the planner. |
| `org.apache.flink.table.dataview.ListViewSerializer` | |
| `org.apache.flink.table.dataview.ListViewSerializerSnapshot` | |
| `org.apache.flink.table.dataview.ListViewTypeInfo` | |
| `org.apache.flink.table.dataview.ListViewTypeInfoFactory` | |
| `org.apache.flink.table.dataview.MapViewSerializer` | |
| `org.apache.flink.table.dataview.MapViewSerializerSnapshot` | |
| `org.apache.flink.table.dataview.MapViewTypeInfo` | |
| `org.apache.flink.table.dataview.MapViewTypeInfoFactory` | |
| `org.apache.flink.table.dataview.NullAwareMapSerializer` | |
| `org.apache.flink.table.dataview.NullAwareMapSerializerSnapshot` | |
| `org.apache.flink.table.descriptors.Rowtime` | This class was used for legacy connectors using `Descriptor`. |
| `org.apache.flink.table.sources.RowtimeAttributeDescriptor` | This class will not be supported in the new source design around `DynamicTableSource`. Use the concept of computed columns instead. See FLIP-95 for more information. |
| `org.apache.flink.table.descriptors.Schema` | This class was used for legacy connectors using `Descriptor`. |
| `org.apache.flink.table.api.TableColumn` | See `ResolvedSchema` and `Column`. |
| `org.apache.flink.table.api.TableSchema` | This class has been deprecated as part of FLIP-164. It has been replaced by two more dedicated classes, `Schema` and `ResolvedSchema`. Use `Schema` for declaration in APIs; `ResolvedSchema` is offered by the framework after resolution and validation. |
| `org.apache.flink.table.typeutils.TimeIndicatorTypeInfo` | This class will be removed in future versions, as it is used for the old type system. It is recommended to use `DataTypes` instead. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.typeutils.TimeIntervalTypeInfo` | This class will be removed in future versions, as it is used for the old type system. It is recommended to use `DataTypes` instead. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.sources.tsextractors.TimestampExtractor` | This class will not be supported in the new source design around `DynamicTableSource`. Use the concept of computed columns instead. See FLIP-95 for more information. |
| `org.apache.flink.table.types.logical.TypeInformationRawType` | Use `RawType` instead. |
| `org.apache.flink.table.api.Types` | This class will be removed in future versions, as it uses the old type system. It is recommended to use `DataTypes` instead, which uses the new type system based on instances of `DataType`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.utils.TypeStringUtils` | This utility is based on `TypeInformation`. However, the Table & SQL API is currently being updated to use `DataType`s based on `LogicalType`s. Use `LogicalTypeParser` instead. |
| `org.apache.flink.table.api.constraints.UniqueConstraint` | See `ResolvedSchema` and `UniqueConstraint`. |
| `org.apache.flink.table.api.WatermarkSpec` | See `ResolvedSchema` and `WatermarkSpec`. |
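To make the `TableSchema`/`Types` replacement concrete, a schema declared with the new `Schema` builder and `DataTypes` might look like the following sketch (the column names are invented for illustration). Note how proctime and rowtime attributes become a computed column and a watermark declaration, replacing `Rowtime` and `RowtimeAttributeDescriptor`:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

// Fragment: an unresolved Schema that the framework later resolves
// and validates into a ResolvedSchema.
Schema schema = Schema.newBuilder()
        .column("user_id", DataTypes.BIGINT())
        .column("payload", DataTypes.STRING())
        .column("ts", DataTypes.TIMESTAMP_LTZ(3))
        // computed column, replacing DefinedProctimeAttribute
        .columnByExpression("proc", "PROCTIME()")
        // watermark declaration, replacing DefinedRowtimeAttributes
        .watermark("ts", "ts - INTERVAL '5' SECOND")
        .build();
```

Unlike the old `TableSchema`, this object is always unresolved; the resolved counterpart (`ResolvedSchema`) is produced by the framework, not constructed by user code.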
| Method | Description |
|---|---|
| `org.apache.flink.table.sinks.TableSink.configure(String[], TypeInformation<?>[])` | This method will be dropped in future versions. It is recommended to pass a static schema when instantiating the sink instead. |
| `org.apache.flink.table.factories.CatalogFactory.createCatalog(String, Map<String, String>)` | Use `CatalogFactory.createCatalog(Context)` instead and implement `Factory` instead of `TableFactory`. |
| `org.apache.flink.table.factories.ModuleFactory.createModule(Map<String, String>)` | Use `ModuleFactory.createModule(Context)` instead and implement `Factory` instead of `TableFactory`. |
| `org.apache.flink.table.factories.TableSinkFactory.createTableSink(Map<String, String>)` | `TableSinkFactory.Context` contains more information, including the table schema. Use `TableSinkFactory.createTableSink(Context)` instead. |
| `org.apache.flink.table.factories.TableSinkFactory.createTableSink(ObjectPath, CatalogTable)` | `TableSinkFactory.Context` contains more information, including the table schema. Use `TableSinkFactory.createTableSink(Context)` instead. |
| `org.apache.flink.table.factories.TableSourceFactory.createTableSource(Map<String, String>)` | `TableSourceFactory.Context` contains more information, including the table schema. Use `TableSourceFactory.createTableSource(Context)` instead. |
| `org.apache.flink.table.factories.TableSourceFactory.createTableSource(ObjectPath, CatalogTable)` | `TableSourceFactory.Context` contains more information, including the table schema. Use `TableSourceFactory.createTableSource(Context)` instead. |
| `org.apache.flink.table.descriptors.Schema.field(String, TypeInformation<?>)` | This method will be removed in future versions, as it uses the old type system. Please use `Schema.field(String, DataType)` instead. |
| `org.apache.flink.table.api.TableSchema.Builder.field(String, TypeInformation<?>)` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSchema.Builder.field(String, DataType)` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(DataType)` | Please don't use this method anymore. It will be removed soon, and we should not make the removal more painful. |
| `org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(DataType[])` | Please don't use this method anymore. It will be removed soon, and we should not make the removal more painful. |
| `org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType(TypeInformation<?>)` | Please don't use this method anymore. It will be removed soon, and we should not make the removal more painful. |
| `org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType(TypeInformation<?>[])` | Please don't use this method anymore. It will be removed soon, and we should not make the removal more painful. |
| `org.apache.flink.table.api.TableSchema.fromTypeInfo(TypeInformation<?>)` | This method will be removed soon. Use `DataTypes` to declare types. |
| `org.apache.flink.table.functions.ImperativeAggregateFunction.getAccumulatorType()` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.sinks.TableSink.getFieldNames()` | Use the field names of `TableSink.getTableSchema()` instead. |
| `org.apache.flink.table.api.TableSchema.getFieldType(int)` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSchema.getFieldDataType(int)` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.api.TableSchema.getFieldType(String)` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSchema.getFieldDataType(String)` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.sinks.TableSink.getFieldTypes()` | Use the field types of `TableSink.getTableSchema()` instead. |
| `org.apache.flink.table.api.TableSchema.getFieldTypes()` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSchema.getFieldDataTypes()` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.plan.stats.ColumnStats.getMaxValue()` | |
| `org.apache.flink.table.plan.stats.ColumnStats.getMinValue()` | |
| `org.apache.flink.table.sinks.TableSink.getOutputType()` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSink.getConsumedDataType()` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.functions.ScalarFunction.getParameterTypes(Class<?>[])` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.functions.TableFunction.getParameterTypes(Class<?>[])` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.functions.TableFunction.getResultType()` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.functions.ImperativeAggregateFunction.getResultType()` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.functions.ScalarFunction.getResultType(Class<?>[])` | This method uses the old type system and is based on the old reflective extraction logic. It will be removed in future versions and is only called when using the deprecated `TableEnvironment.registerFunction(...)` method. The new reflective extraction logic (possibly enriched with `DataTypeHint` and `FunctionHint`) should be powerful enough to cover most use cases. For advanced users, it is possible to override `UserDefinedFunction.getTypeInference(DataTypeFactory)`. |
| `org.apache.flink.table.sources.TableSource.getReturnType()` | This method will be removed in future versions, as it uses the old type system. It is recommended to use `TableSource.getProducedDataType()` instead, which uses the new type system based on `DataTypes`. Please make sure to use either the old or the new type system consistently to avoid unintended behavior. See the website documentation for more information. |
| `org.apache.flink.table.catalog.CatalogBaseTable.getSchema()` | This method returns the deprecated `TableSchema` class. The old class was a hybrid of resolved and unresolved schema information. It has been replaced by the new `Schema`, which is always unresolved and will be resolved by the framework later. |
| `org.apache.flink.table.catalog.ResolvedCatalogBaseTable.getSchema()` | This method returns the deprecated `TableSchema` class. The old class was a hybrid of resolved and unresolved schema information. It has been replaced by the new `ResolvedSchema`, which is resolved by the framework and accessible via `ResolvedCatalogBaseTable.getResolvedSchema()`. |
| `org.apache.flink.table.catalog.Catalog.getTableFactory()` | Use `Catalog.getFactory()` for the new factory stack. The new factory stack uses the new table sources and sinks defined in FLIP-95 and a slightly different discovery mechanism. |
| `org.apache.flink.table.sources.TableSource.getTableSchema()` | A table schema is a logical description of a table and should not be part of the physical `TableSource`. Define the schema when registering a table, either in DDL or in `TableEnvironment#connect(...)`. |
| `org.apache.flink.table.api.TableColumn.of(String, DataType)` | Use `TableColumn.physical(String, DataType)` instead. |
| `org.apache.flink.table.api.TableColumn.of(String, DataType, String)` | Use `TableColumn.computed(String, DataType, String)` instead. |
| `org.apache.flink.table.factories.CatalogFactory.requiredContext()` | Implement the `Factory`-based stack instead. |
| `org.apache.flink.table.factories.ModuleFactory.requiredContext()` | Implement the `Factory`-based stack instead. |
| `org.apache.flink.table.factories.CatalogFactory.supportedProperties()` | Implement the `Factory`-based stack instead. |
| `org.apache.flink.table.factories.ModuleFactory.supportedProperties()` | Implement the `Factory`-based stack instead. |
| `org.apache.flink.table.catalog.CatalogTable.toProperties()` | Only a `ResolvedCatalogTable` is serializable to properties. |
| `org.apache.flink.table.api.TableSchema.toRowType()` | Use `TableSchema.toRowDataType()` instead. |
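In place of the deprecated `getResultType()` / `getParameterTypes(...)` / `getAccumulatorType()` methods, result types are usually declared via `DataTypeHint` and `FunctionHint` annotations and picked up by the new reflective extraction. A sketch adapted from the common split-function pattern:

```java
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.annotation.FunctionHint;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.types.Row;

// The output row type is declared by annotation instead of overriding
// the deprecated TableFunction.getResultType().
@FunctionHint(output = @DataTypeHint("ROW<word STRING, length INT>"))
public class SplitFunction extends TableFunction<Row> {

    public void eval(String str) {
        for (String s : str.split(" ")) {
            // emits Row(word, length) per token
            collect(Row.of(s, s.length()));
        }
    }
}
```

For simple scalar or table functions the hints are often unnecessary, since the extraction can derive types from the `eval(...)` signature; advanced cases can override `UserDefinedFunction.getTypeInference(DataTypeFactory)` instead.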
| Constructor | Description |
|---|---|
| `org.apache.flink.table.plan.stats.ColumnStats(Long, Long, Double, Integer, Number, Number)` | |
| `org.apache.flink.table.api.dataview.ListView(TypeInformation<?>)` | This constructor uses the old type system. Please use a `DataTypeHint` instead if the reflective type extraction is not successful. |
| `org.apache.flink.table.api.dataview.MapView(TypeInformation<?>, TypeInformation<?>)` | This constructor uses the old type system. Please use a `DataTypeHint` instead if the reflective type extraction is not successful. |
| `org.apache.flink.table.api.TableSchema(String[], TypeInformation<?>[])` | Use the `TableSchema.Builder` instead. |
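For the `ListView`/`MapView` constructors, the replacement is simply the no-arg constructor: the element types are now extracted reflectively from the field's generic parameters. A hypothetical accumulator sketch (the class and field names are invented):

```java
import org.apache.flink.table.api.dataview.ListView;

// Accumulator for an aggregate function. The element type (STRING) is
// derived from the generic parameter, so the deprecated
// ListView(TypeInformation) constructor is no longer needed. If the
// extraction fails, annotate the field with @DataTypeHint instead.
public class VisitAccumulator {
    public ListView<String> visitedUrls = new ListView<>();
    public long count = 0L;
}
```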
Copyright © 2014–2022 The Apache Software Foundation. All rights reserved.