| Interface and Description |
|---|
| org.apache.flink.table.connector.source.AsyncTableFunctionProvider
Please use
AsyncLookupFunctionProvider to implement an asynchronous lookup
table. |
| org.apache.flink.table.catalog.CatalogLock
This interface will be removed soon. Please see FLIP-346 for more details.
|
| org.apache.flink.table.catalog.CatalogLock.Factory
This interface will be removed soon. Please see FLIP-346 for more details.
|
| org.apache.flink.table.api.constraints.Constraint
See
ResolvedSchema and Constraint. |
| org.apache.flink.table.sources.DefinedFieldMapping
This interface will not be supported in the new source design around
DynamicTableSource. See FLIP-95 for more information. |
| org.apache.flink.table.sources.DefinedProctimeAttribute
This interface will not be supported in the new source design around
DynamicTableSource. Use the concept of computed columns instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.DefinedRowtimeAttributes
This interface will not be supported in the new source design around
DynamicTableSource. Use the concept of computed columns instead. See FLIP-95 for more
information. |
| org.apache.flink.table.descriptors.Descriptor
Descriptor was primarily used for the legacy connector stack and has been
deprecated. Use TableDescriptor for creating sources and sinks from the Table API. |
| org.apache.flink.table.descriptors.DescriptorValidator
See
Descriptor for details. |
| org.apache.flink.table.sources.FieldComputer
This interface will not be supported in the new source design around
DynamicTableSource. Use the concept of computed columns instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.FilterableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use SupportsFilterPushDown instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.LimitableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use SupportsLimitPushDown instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.LookupableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use LookupTableSource instead. See FLIP-95 for more information. |
| org.apache.flink.table.factories.ManagedTableFactory
This interface will be removed soon. Please see FLIP-346 for more details.
|
| org.apache.flink.table.sources.NestedFieldsProjectableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use SupportsProjectionPushDown instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sinks.OverwritableTableSink
This interface will not be supported in the new sink design around
DynamicTableSink. Use SupportsOverwrite instead. See FLIP-95 for more information. |
| org.apache.flink.table.sinks.PartitionableTableSink
This interface will not be supported in the new sink design around
DynamicTableSink. Use SupportsPartitioning instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.PartitionableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use SupportsPartitionPushDown instead. See FLIP-95 for more
information. |
| org.apache.flink.table.sources.ProjectableTableSource
This interface will not be supported in the new source design around
DynamicTableSource. Use SupportsProjectionPushDown instead. See FLIP-95 for more
information. |
| org.apache.flink.table.connector.RequireCatalogLock
This interface will be removed soon. Please see FLIP-346 for more details.
|
| org.apache.flink.table.connector.sink.SinkProvider
Please convert your sink to
Sink and use
SinkV2Provider. |
| org.apache.flink.table.factories.TableFactory
This interface has been replaced by
Factory. |
| org.apache.flink.table.connector.source.TableFunctionProvider
Please use
LookupFunctionProvider to implement a synchronous lookup table. |
| org.apache.flink.table.sinks.TableSink
This interface has been replaced by
DynamicTableSink. The new interface
consumes internal data structures. See FLIP-95 for more information. |
| org.apache.flink.table.factories.TableSinkFactory
This interface has been replaced by
DynamicTableSinkFactory. The new
interface consumes internal data structures. See FLIP-95 for more information. |
| org.apache.flink.table.sources.TableSource
This interface has been replaced by
DynamicTableSource. The new interface
produces internal data structures. See FLIP-95 for more information. |
| org.apache.flink.table.factories.TableSourceFactory
This interface has been replaced by
DynamicTableSourceFactory. The new
interface produces internal data structures. See FLIP-95 for more information. |
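Many of the rows above point from the legacy TableSource stack to the DynamicTableSource stack introduced in FLIP-95. The following is a minimal sketch of a new-style scan source; the class name is illustrative, and the runtime provider (normally a SourceProvider or similar) is deliberately elided, so this is a skeleton rather than a working connector:

```java
import org.apache.flink.table.connector.ChangelogMode;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.connector.source.ScanTableSource;

// Skeleton of a FLIP-95 scan source replacing the legacy TableSource
// interface. The planner interacts with this class; the actual
// data-producing runtime implementation is elided in this sketch.
public class ExampleScanSource implements ScanTableSource {

    @Override
    public ChangelogMode getChangelogMode() {
        // An insert-only source; CDC-style sources would also declare
        // update and delete row kinds here.
        return ChangelogMode.insertOnly();
    }

    @Override
    public ScanRuntimeProvider getScanRuntimeProvider(ScanContext context) {
        // A real connector returns e.g. a SourceProvider here.
        throw new UnsupportedOperationException("elided in this sketch");
    }

    @Override
    public DynamicTableSource copy() {
        return new ExampleScanSource();
    }

    @Override
    public String asSummaryString() {
        return "Example scan source";
    }
}
```

Ability interfaces such as SupportsFilterPushDown or SupportsProjectionPushDown are mixed in by additionally implementing them on the same class.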
| Class and Description |
|---|
| org.apache.flink.table.functions.AggregateFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
| org.apache.flink.table.descriptors.ConnectorDescriptorValidator |
| org.apache.flink.table.descriptors.DescriptorProperties
This utility will be dropped soon.
DynamicTableFactory is based on ConfigOption and catalogs use CatalogPropertiesUtil. |
| org.apache.flink.table.descriptors.FileSystemValidator
The legacy CSV connector has been replaced by
FileSource / FileSink.
It is kept only to support tests for the legacy connector stack. |
| org.apache.flink.table.types.utils.LegacyTypeInfoDataTypeConverter
Use
DataTypeFactory.createDataType(TypeInformation) instead. Note that this
method will not create legacy types anymore. It fully uses the new type system available only
in the planner. |
| org.apache.flink.table.functions.LegacyUserDefinedFunctionInference |
| org.apache.flink.table.dataview.ListViewSerializer |
| org.apache.flink.table.dataview.ListViewSerializerSnapshot |
| org.apache.flink.table.dataview.ListViewTypeInfo |
| org.apache.flink.table.dataview.ListViewTypeInfoFactory |
| org.apache.flink.table.dataview.MapViewSerializer |
| org.apache.flink.table.dataview.MapViewSerializerSnapshot |
| org.apache.flink.table.dataview.MapViewTypeInfo |
| org.apache.flink.table.dataview.MapViewTypeInfoFactory |
| org.apache.flink.table.dataview.NullAwareMapSerializer |
| org.apache.flink.table.dataview.NullAwareMapSerializerSnapshot |
| org.apache.flink.table.descriptors.Rowtime
This class was used for legacy connectors using
Descriptor. |
| org.apache.flink.table.sources.RowtimeAttributeDescriptor
This interface will not be supported in the new source design around
DynamicTableSource. Use the concept of computed columns instead. See FLIP-95 for more
information. |
| org.apache.flink.table.functions.ScalarFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
| org.apache.flink.table.descriptors.Schema
This class was used for legacy connectors using
Descriptor. |
| org.apache.flink.table.functions.TableAggregateFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
| org.apache.flink.table.api.TableColumn
See
ResolvedSchema and Column. |
| org.apache.flink.table.factories.TableFactoryService |
| org.apache.flink.table.functions.TableFunctionDefinition
Non-legacy functions can simply omit this wrapper for declarations.
|
| org.apache.flink.table.api.TableSchema
This class has been deprecated as part of FLIP-164. It has been replaced by two more
dedicated classes
Schema and ResolvedSchema. Use Schema for
declaration in APIs. ResolvedSchema is offered by the framework after resolution and
validation. |
| org.apache.flink.table.sinks.TableSinkBase
This class is implementing the deprecated
TableSink interface. Implement
DynamicTableSink directly instead. |
| org.apache.flink.table.factories.TableSinkFactoryContextImpl |
| org.apache.flink.table.factories.TableSourceFactoryContextImpl |
| org.apache.flink.table.typeutils.TimeIndicatorTypeInfo
This class will be removed in future versions as it is used for the old type system.
It is recommended to use
DataTypes instead. Please make sure to use either the old or
the new type system consistently to avoid unintended behavior. See the website documentation
for more information. |
| org.apache.flink.table.typeutils.TimeIntervalTypeInfo
This class will be removed in future versions as it is used for the old type system.
It is recommended to use
DataTypes instead. Please make sure to use either the old or
the new type system consistently to avoid unintended behavior. See the website documentation
for more information. |
| org.apache.flink.table.sources.tsextractors.TimestampExtractor
This interface will not be supported in the new source design around
DynamicTableSource. Use the concept of computed columns instead. See FLIP-95 for more
information. |
| org.apache.flink.table.types.logical.TypeInformationRawType
Use
RawType instead. |
| org.apache.flink.table.api.Types
This class will be removed in future versions as it uses the old type system. It is
recommended to use
DataTypes instead which uses the new type system based on
instances of DataType. Please make sure to use either the old or the new type system
consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.utils.TypeStringUtils
This utility is based on
TypeInformation. However, the Table & SQL API is
currently being updated to use DataTypes based on LogicalTypes. Use LogicalTypeParser instead. |
| org.apache.flink.table.api.constraints.UniqueConstraint
See
ResolvedSchema and UniqueConstraint. |
| org.apache.flink.table.api.WatermarkSpec
See
ResolvedSchema and WatermarkSpec. |
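Several of the classes above (TableSchema, TableColumn, Types, WatermarkSpec) are superseded by the FLIP-164 Schema builder together with the DataTypes factory. A minimal sketch of a declaration in the new style; the column names and watermark expression are purely illustrative:

```java
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Schema;

public class SchemaDeclaration {

    // Sketch: declaring an unresolved Schema with the FLIP-164 builder.
    // The framework resolves and validates it into a ResolvedSchema later.
    public static Schema example() {
        return Schema.newBuilder()
                .column("user_id", DataTypes.BIGINT())
                .column("amount", DataTypes.DECIMAL(10, 2))
                .column("ts", DataTypes.TIMESTAMP_LTZ(3))
                .watermark("ts", "ts - INTERVAL '5' SECOND")
                .build();
    }
}
```

Schema is used for declaration in APIs, while ResolvedSchema is what the framework hands back after resolution and validation.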
| Exceptions and Description |
|---|
| org.apache.flink.table.api.AmbiguousTableFactoryException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
AmbiguousTableFactoryException and should not be used
directly anymore. |
| org.apache.flink.table.api.ExpressionParserException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
ExpressionParserException and should not be used directly
anymore. |
| org.apache.flink.table.api.NoMatchingTableFactoryException
This exception is considered internal and has been erroneously placed in the *.api
package. It is replaced by
NoMatchingTableFactoryException and should not be used
directly anymore. |
| Field and Description |
|---|
| org.apache.flink.table.api.dataview.ListView.elementType |
| org.apache.flink.table.api.dataview.MapView.keyType |
| org.apache.flink.table.module.CommonModuleOptions.MODULE_TYPE
This is only required for the legacy factory stack.
|
| org.apache.flink.table.descriptors.Schema.SCHEMA_TYPE |
| org.apache.flink.table.api.dataview.MapView.valueType |
| Method and Description |
|---|
| org.apache.flink.table.connector.source.abilities.SupportsProjectionPushDown.applyProjection(int[][])
Please implement
SupportsProjectionPushDown.applyProjection(int[][], DataType) |
| org.apache.flink.table.sinks.TableSink.configure(String[], TypeInformation<?>[])
This method will be dropped in future versions. It is recommended to pass a
static schema when instantiating the sink instead.
|
| org.apache.flink.table.factories.CatalogFactory.createCatalog(String, Map<String, String>)
Use
CatalogFactory.createCatalog(Context) instead and implement Factory
instead of TableFactory. |
| org.apache.flink.table.factories.FactoryUtil.createDynamicTableSink(DynamicTableSinkFactory, ObjectIdentifier, ResolvedCatalogTable, ReadableConfig, ClassLoader, boolean) |
| org.apache.flink.table.factories.FactoryUtil.createDynamicTableSource(DynamicTableSourceFactory, ObjectIdentifier, ResolvedCatalogTable, ReadableConfig, ClassLoader, boolean) |
| org.apache.flink.table.factories.FunctionDefinitionFactory.createFunctionDefinition(String, CatalogFunction)
Please implement
FunctionDefinitionFactory.createFunctionDefinition(String, CatalogFunction,
Context) instead. |
| org.apache.flink.table.factories.ModuleFactory.createModule(Map<String, String>)
Use
ModuleFactory.createModule(Context) instead and implement Factory instead
of TableFactory. |
| org.apache.flink.table.factories.FactoryUtil.createTableSink(Catalog, ObjectIdentifier, ResolvedCatalogTable, ReadableConfig, ClassLoader, boolean) |
| org.apache.flink.table.factories.TableSinkFactory.createTableSink(Map<String, String>)
TableSinkFactory.Context provides more information and already contains the table schema.
Please use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.TableSinkFactory.createTableSink(ObjectPath, CatalogTable)
TableSinkFactory.Context provides more information and already contains the table schema.
Please use TableSinkFactory.createTableSink(Context) instead. |
| org.apache.flink.table.factories.FactoryUtil.createTableSource(Catalog, ObjectIdentifier, ResolvedCatalogTable, ReadableConfig, ClassLoader, boolean) |
| org.apache.flink.table.factories.TableSourceFactory.createTableSource(Map<String, String>)
TableSourceFactory.Context provides more information and already contains the table schema.
Please use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.factories.TableSourceFactory.createTableSource(ObjectPath, CatalogTable)
TableSourceFactory.Context provides more information and already contains the table schema.
Please use TableSourceFactory.createTableSource(Context) instead. |
| org.apache.flink.table.descriptors.Schema.field(String, TypeInformation<?>)
This method will be removed in future versions as it uses the old type system.
Please use
Schema.field(String, DataType) instead. |
| org.apache.flink.table.api.TableSchema.Builder.field(String, TypeInformation<?>)
This method will be removed in future versions as it uses the old type
system. It is recommended to use
TableSchema.Builder.field(String, DataType) instead which uses
the new type system based on DataTypes. Please make sure to use either the
old or the new type system consistently to avoid unintended behavior. See the website
documentation for more information. |
| org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(DataType)
Please don't use this method anymore. It will be removed soon and we should not
make the removal more painful. Sources and sinks should use the method available in
context to convert, within the planner you should use either
InternalTypeInfo or
ExternalTypeInfo depending on the use case. |
| org.apache.flink.table.types.utils.TypeConversions.fromDataTypeToLegacyInfo(DataType[])
Please don't use this method anymore. It will be removed soon and we should not
make the removal more painful. Sources and sinks should use the method available in
context to convert, within the planner you should use either
InternalTypeInfo or
ExternalTypeInfo depending on the use case. |
| org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType(TypeInformation<?>)
Please don't use this method anymore. It will be removed soon and we should not
make the removal more painful. Sources and sinks should use the method available in
context to convert, within the planner you should use either
InternalTypeInfo or
ExternalTypeInfo depending on the use case. |
| org.apache.flink.table.types.utils.TypeConversions.fromLegacyInfoToDataType(TypeInformation<?>[])
Please don't use this method anymore. It will be removed soon and we should not
make the removal more painful. Sources and sinks should use the method available in
context to convert, within the planner you should use either
InternalTypeInfo or
ExternalTypeInfo depending on the use case. |
| org.apache.flink.table.api.TableSchema.fromTypeInfo(TypeInformation<?>)
This method will be removed soon. Use
DataTypes to declare types. |
| org.apache.flink.table.functions.ImperativeAggregateFunction.getAccumulatorType()
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.sinks.TableSink.getFieldNames()
Use the field names of
TableSink.getTableSchema() instead. |
| org.apache.flink.table.api.TableSchema.getFieldType(int)
This method will be removed in future versions as it uses the old type system. It
is recommended to use
TableSchema.getFieldDataType(int) instead which uses the new type
system based on DataTypes. Please make sure to use either the old or the new type
system consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.api.TableSchema.getFieldType(String)
This method will be removed in future versions as it uses the old type system. It
is recommended to use
TableSchema.getFieldDataType(String) instead which uses the new type
system based on DataTypes. Please make sure to use either the old or the new type
system consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.sinks.TableSink.getFieldTypes()
Use the field types of
TableSink.getTableSchema() instead. |
| org.apache.flink.table.api.TableSchema.getFieldTypes()
This method will be removed in future versions as it uses the old type system. It
is recommended to use
TableSchema.getFieldDataTypes() instead which uses the new type system
based on DataTypes. Please make sure to use either the old or the new type system
consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.plan.stats.ColumnStats.getMaxValue() |
| org.apache.flink.table.plan.stats.ColumnStats.getMinValue() |
| org.apache.flink.table.sinks.TableSink.getOutputType()
This method will be removed in future versions as it uses the old type system. It
is recommended to use
TableSink.getConsumedDataType() instead which uses the new type
system based on DataTypes. Please make sure to use either the old or the new type
system consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.functions.TableFunction.getParameterTypes(Class<?>[])
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.ScalarFunction.getParameterTypes(Class<?>[])
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.ImperativeAggregateFunction.getResultType()
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.TableFunction.getResultType()
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.functions.ScalarFunction.getResultType(Class<?>[])
This method uses the old type system and is based on the old reflective
extraction logic. The method will be removed in future versions and is only called when
using the deprecated
TableEnvironment.registerFunction(...) method. The new
reflective extraction logic (possibly enriched with DataTypeHint and FunctionHint) should be powerful enough to cover most use cases. For advanced users, it
is possible to override UserDefinedFunction.getTypeInference(DataTypeFactory). |
| org.apache.flink.table.sources.TableSource.getReturnType()
This method will be removed in future versions as it uses the old type system. It
is recommended to use
TableSource.getProducedDataType() instead which uses the new type
system based on DataTypes. Please make sure to use either the old or the new type
system consistently to avoid unintended behavior. See the website documentation for more
information. |
| org.apache.flink.table.catalog.ResolvedCatalogBaseTable.getSchema()
This method returns the deprecated
TableSchema class. The old class was a
hybrid of resolved and unresolved schema information. It has been replaced by the new
ResolvedSchema which is resolved by the framework and accessible via ResolvedCatalogBaseTable.getResolvedSchema(). |
| org.apache.flink.table.catalog.CatalogBaseTable.getSchema()
This method returns the deprecated
TableSchema class. The old class was a
hybrid of resolved and unresolved schema information. It has been replaced by the new
Schema which is always unresolved and will be resolved by the framework later. |
| org.apache.flink.table.catalog.Catalog.getTableFactory()
Use
Catalog.getFactory() for the new factory stack. The new factory stack uses
the new table sources and sinks defined in FLIP-95 and a slightly different discovery
mechanism. |
| org.apache.flink.table.sources.TableSource.getTableSchema()
Table schema is a logical description of a table and should not be part of the
physical TableSource. Define schema when registering a Table either in DDL or in
TableEnvironment#connect(...). |
| org.apache.flink.table.catalog.CatalogFunction.isGeneric()
There is no replacement for this method, as functions now have type inference
strategies.
|
| org.apache.flink.table.utils.EncodingUtils.loadClass(String)
Use
EncodingUtils.loadClass(String, ClassLoader) instead, in order to explicitly
provide the correct classloader. |
| org.apache.flink.table.api.TableColumn.of(String, DataType)
Use
TableColumn.physical(String, DataType) instead. |
| org.apache.flink.table.api.TableColumn.of(String, DataType, String)
Use
TableColumn.computed(String, DataType, String) instead. |
| org.apache.flink.table.types.logical.utils.LogicalTypeParser.parse(String)
Use
LogicalTypeParser.parse(String, ClassLoader) instead to correctly load user types. |
| org.apache.flink.table.types.utils.DataTypeUtils.projectRow(DataType, int[])
Use the
Projection type instead. |
| org.apache.flink.table.types.utils.DataTypeUtils.projectRow(DataType, int[][])
Use the
Projection type instead. |
| org.apache.flink.table.factories.ModuleFactory.requiredContext()
Implement the
Factory based stack instead. |
| org.apache.flink.table.factories.CatalogFactory.requiredContext()
Implement the
Factory based stack instead. |
| org.apache.flink.table.factories.ModuleFactory.supportedProperties()
Implement the
Factory based stack instead. |
| org.apache.flink.table.factories.CatalogFactory.supportedProperties()
Implement the
Factory based stack instead. |
| org.apache.flink.table.catalog.Catalog.supportsManagedTable()
This method will be removed soon. Please see FLIP-346 for more details.
|
| org.apache.flink.table.catalog.CatalogTable.toProperties()
Only a
ResolvedCatalogTable is serializable to properties. |
| org.apache.flink.table.api.TableSchema.toRowType()
Use
TableSchema.toRowDataType() instead. |
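The getResultType, getAccumulatorType, and getParameterTypes rows above all recommend the new reflective extraction logic, optionally refined with DataTypeHint or FunctionHint annotations. A hedged sketch of a scalar function in the new style; the function name and decimal precision are illustrative:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import org.apache.flink.table.annotation.DataTypeHint;
import org.apache.flink.table.functions.ScalarFunction;

// Sketch: with the new type system the result type is extracted
// reflectively from the eval signature; a DataTypeHint refines it where
// reflection alone is not precise enough. No getResultType override needed.
public class RoundDownFunction extends ScalarFunction {

    public @DataTypeHint("DECIMAL(12, 2)") BigDecimal eval(double value) {
        // Truncate to two fractional digits.
        return BigDecimal.valueOf(value).setScale(2, RoundingMode.DOWN);
    }
}
```

Advanced users who need full control can instead override UserDefinedFunction.getTypeInference(DataTypeFactory), as the entries above note.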
Copyright © 2014–2024 The Apache Software Foundation. All rights reserved.