2.4.0 release notes
This release of AI-Link introduces new optimized operations for programmatic interaction with AtScale and expands the available methods of authorization:
- Expanded CRUD operation support for data models, perspectives, calculated columns, and dimensions.
- Enabled OAuth-based connections to Snowflake.
Please refer to our API documentation for the latest AI-Link syntax. See below for the updates associated with this release.
Data Warehouse Support Updates
- Added support for Postgres data warehouses.
- Added a `clear_auth` function to all warehouses, allowing for quick clearing of sensitive authentication information.
- Added a `token` parameter to the `Snowflake` object constructor to support OAuth-based connections.
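The intent behind `clear_auth` can be pictured with a minimal sketch. The class and attribute names below are illustrative only, not AI-Link's actual implementation: a connection object simply drops its cached credentials so sensitive values no longer live in memory once a session is done.

```python
# Illustrative sketch only -- not AI-Link's implementation. It shows the
# pattern the new clear_auth function follows: null out every credential
# field so the connection object no longer holds sensitive values.
class ToyWarehouseConnection:
    def __init__(self, username, password=None, token=None):
        self.username = username
        self._password = password
        self._token = token  # e.g. an OAuth token, as in the new Snowflake support

    def clear_auth(self):
        """Remove sensitive authentication information from the object."""
        self._password = None
        self._token = None


conn = ToyWarehouseConnection("analyst", password="s3cret", token="oauth-abc")
conn.clear_auth()
assert conn._password is None and conn._token is None
```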
New Python Helper Functions for Programmatic Interaction
- CRUD operation support: added additional functions to interact with various objects in the semantic layer.
- DataModel creation: Project now has a function to create a blank DataModel, and DataModel now has a function to clone itself.
- Perspective updating: allows the user to edit what is hidden by a given perspective.
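The new creation workflow can be pictured with a small in-memory sketch. The classes below are toys with illustrative names, not AI-Link's API; they only show the relationship between a project that creates blank models and a model that clones itself into the same project.

```python
# Illustrative sketch of the new creation workflow -- not AI-Link's API.
# A Project can create a blank DataModel, and a DataModel can clone itself
# into the same project.
class ToyDataModel:
    def __init__(self, name, project):
        self.name = name
        self.project = project

    def clone(self, new_name):
        """Create a copy of this model inside the same project."""
        copy = ToyDataModel(new_name, self.project)
        self.project.models.append(copy)
        return copy


class ToyProject:
    def __init__(self):
        self.models = []

    def create_data_model(self, name):
        """Create a new, blank DataModel in this project."""
        model = ToyDataModel(name, self)
        self.models.append(model)
        return model


proj = ToyProject()
base = proj.create_data_model("base_model")
copy = base.clone("base_model_copy")
assert [m.name for m in proj.models] == ["base_model", "base_model_copy"]
```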
Non-Functional Updates
- UX quality-of-life improvements: additional customizations for the creation and management of aggregates around UDFs.
- Bug fixes: addressed issues with role-playing and with features that have different key/value columns when joining objects to the semantic layer.
Changelog for Syntax Updates
enums.py
REMOVED CLASS:
- `LevelType` - functionality moved to the `TimeSteps` class
UPDATED CLASSES:
- `TimeSteps` - merged in functionality from the former `LevelType` class to avoid duplication
- `TimeLevels` - renamed class variable `val` to `atscale_value`
NEW CLASS:
- `Dimension` - an enum representing the metadata of a dimension object for DMV queries; this does not have direct customer use cases but is publicly visible
connection.py::Connection
UPDATED FUNCTIONS:
- `_submit_request` - builds in a retry in the event of an internal server error
- `get_connected_schemas` - the `database` parameter is now required
- `get_connected_tables` - the `database` and `schema` parameters are now required
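The retry behavior added to `_submit_request` can be sketched generically. This is a stdlib illustration of retrying on an internal server error, not AI-Link's actual code, and all names here are illustrative.

```python
# Generic retry-on-HTTP-500 sketch (illustrative, not AI-Link's code).
import time

def submit_with_retry(request_fn, retries=3, backoff=0.0):
    """Call request_fn(); retry when it reports an internal server error."""
    for attempt in range(retries):
        status, body = request_fn()
        if status != 500:  # only internal server errors are retried
            return status, body
        time.sleep(backoff * (attempt + 1))
    return status, body

# Simulated endpoint that fails once with a 500, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return (500, "error") if calls["n"] == 1 else (200, "ok")

assert submit_with_retry(flaky) == (200, "ok")
```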
bigquery.py::BigQuery
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
databricks.py::Databricks
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
iris.py::Iris
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
mssql.py::MSSQL
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
postgres.py::Postgres
NEW FUNCTIONS:
- added initial support for Postgres data warehouses
redshift.py::Redshift
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
snowflake.py::Snowflake
UPDATED FUNCTIONS:
- `__init__`:
  - added an optional `token` parameter, adding support for OAuth-based authentication to Snowflake
  - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
  - adjusted the optional `private_key` parameter: it now exclusively accepts DER format
NEW FUNCTIONS:
- `token` - getter and setter added for the new parameter
- `clear_auth` - removes sensitive information from the connection object
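Since `private_key` now exclusively accepts DER format, an existing PEM-encoded key must be converted before connecting. One way to do this with the standard `openssl` CLI is sketched below; the file names are illustrative, and the sketch assumes an unencrypted PKCS#8 key (the first command generates a throwaway key purely for demonstration).

```shell
# Generate a throwaway RSA key (illustration only), then convert
# PEM -> unencrypted PKCS#8 DER for use with the private_key parameter.
openssl genrsa -out rsa_key.pem 2048
openssl pkcs8 -topk8 -inform PEM -outform DER -nocrypt \
    -in rsa_key.pem -out rsa_key.der
```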
synapse.py::Synapse
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
sql_connection.py::SQLConnection
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
sqlalchemy_connection.py::SQLAlchemyConnection
UPDATED FUNCTIONS:
- `__init__` - added an optional `warehouse_id` parameter: the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
data_model.py::DataModel
REMOVED FUNCTIONS:
- `get_column_names` - removed in favor of the more general `get_columns` function
UPDATED FUNCTIONS:
- `get_features`:
  - fixed a bug where the `atscale_type` field incorrectly returned Standard when `use_unpublished` was set to True
  - the `secondary_attribute` flag is now included in the response
  - added checks to block running this function from a perspective with the `use_published` parameter set to False
  - fixed a bug where passing `use_published` as False would return all calculated measures in the project instead of only those in the data model
- `add_table`:
  - added the `allow_aggregates` parameter, which lets users specify whether aggregates will be built off of this dataset
  - the `schema` and `database` parameters are now required
- `writeback_spark`, `writeback_spark_to_spark` - fixed a bug where a variable was referenced before declaration
NEW FUNCTIONS:
- `clone` - creates a clone of the current DataModel in the same project
- `get_columns` - returns a dictionary of metadata for all visible columns in a given dataset
- `get_dimensions` - returns a dictionary of metadata for all dimensions in a DataModel
- `updated_calculated_columns` - updates the SQL expression for a calculated column
- `update_perspective` - updates a perspective to hide the provided inputs
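What `update_perspective` means by "hide the provided inputs" can be pictured with a small in-memory sketch. The class and field names below are illustrative toys, not AI-Link's API: a perspective is a view of a data model with certain objects hidden, and updating it replaces which objects are hidden.

```python
# Illustrative sketch of perspective editing -- not AI-Link's API.
class ToyPerspective:
    def __init__(self, name, hidden=None):
        self.name = name
        self.hidden = set(hidden or [])

    def update(self, hidden):
        """Replace the set of hidden objects with the provided inputs."""
        self.hidden = set(hidden)

    def visible(self, all_objects):
        return [o for o in all_objects if o not in self.hidden]


model_objects = ["sales", "cost", "margin", "forecast"]
p = ToyPerspective("finance", hidden=["forecast"])
p.update(["cost", "margin"])  # hide a different set of objects
assert p.visible(model_objects) == ["sales", "forecast"]
```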
data_model_helpers.py
UPDATED FUNCTIONS:
- `_check_joins` - now prompts the user when a `join_column` cannot be automatically mapped to a field or value column. This affects the following functions: `DataModel.add_query_dataset`, `DataModel.writeback`, `DataModel.writeback_spark`, `DataModel.writeback_spark_to_spark`, `DataModel.add_table`, and `DataModel.create_dataset_relationship`.
model_utils.py
UPDATED FUNCTIONS:
- `_create_dataset_relationship` - fixed a bug where writeback functions would fail with replacement because the `dataset_id` wasn't set correctly
db_utils.py
UPDATED FUNCTIONS:
- `_get_key_cols` - fixed a bug where writeback functions could use the wrong quoting when querying to join in missing key columns
project.py::Project
UPDATED FUNCTIONS:
- `get_snapshots` - return type adjusted to a list of dictionaries to make the response more usable
NEW FUNCTIONS:
- `create_data_model` - creates a new, empty DataModel in the current project
prediction_utils.py
UPDATED FUNCTIONS:
- `join_udf` - added the `allow_aggregates` and `create_hinted_aggs` parameters so users can specify aggregate behavior
- `write_udf_to_qds` - renamed to `write_snowpark_udf_to_qds`, as this function is only Snowpark compatible; also added a `publish` parameter