2.4.0 Release Notes
This release of AI-Link introduces new optimized operations for programmatic interaction with AtScale and expands the available methods of authorization:
- Expanded CRUD operation support for data models, perspectives, calculated columns, and dimensions.
- Enabled OAuth-based connections to Snowflake.
Please refer to our API documentation for the latest syntax to use with AI-Link. See below for the updates associated with this release.
Data Warehouse Support Updates
- Added support for Postgres data warehouses.
- Added a `clear_auth` function to all warehouses, allowing for quick clearing of sensitive authentication information.
- Added a `token` parameter to the Snowflake object constructor to support OAuth-based connections.
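The idea behind `clear_auth` is simply to overwrite a connection object's credential fields once they are no longer needed. A minimal sketch of the pattern, using a stand-in class with hypothetical attribute names (`password`, `token`) that are not taken from AI-Link's actual implementation:

```python
class WarehouseConnection:
    """Illustrative stand-in for a warehouse connection object."""

    def __init__(self, username: str, password: str, token: str = None):
        self.username = username
        self.password = password  # sensitive
        self.token = token        # sensitive

    def clear_auth(self):
        """Remove sensitive authentication information from the object."""
        self.password = None
        self.token = None


conn = WarehouseConnection("analyst", "s3cret", token="oauth-abc")
conn.clear_auth()
```

After the call, the credential attributes are gone while non-sensitive fields such as the username remain usable.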
New Python Helper Functions for Programmatic Interaction
- CRUD operation support: added functions to interact with various objects in the semantic layer.
- DataModel creation: Project now has a function to create a blank DataModel, and DataModel now has a function to clone itself.
- Perspective updating: users can now edit what is hidden by a given perspective.
Non-Functional Updates
- UX quality-of-life improvements: additional customizations for the creation and management of aggregates around UDFs.
- Bug fixes: addressed issues with role-playing and with features that have different key/value columns when joining objects to the semantic layer.
Changelog for Syntax Updates
enums.py
REMOVED CLASS:
- `LevelType` - functionality moved to the `TimeSteps` class
UPDATED CLASSES:
- `TimeSteps` - merged in functionality from the former `LevelType` class to avoid duplication
- `TimeLevels` - renamed class variable `val` to `atscale_value`
NEW CLASS:
- `Dimension` - an enum representing the metadata of a dimension object for DMV queries. This does not have a direct customer use case but is publicly visible.
connection.py::Connection
UPDATED FUNCTIONS:
- `_submit_request` - builds in a retry in the event of an internal server error
- `get_connected_schemas` - parameter `database` is now required
- `get_connected_tables` - parameters `database` and `schema` are now required
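The retry added to `_submit_request` follows a common pattern: re-issue the request a bounded number of times when the server answers with a 5xx status. AI-Link's actual implementation is not shown here; the sketch below illustrates the idea with a generic callable and names of our own choosing:

```python
import time

def submit_with_retry(request_fn, retries: int = 3, backoff: float = 0.5):
    """Call request_fn, retrying on a 500-level response.

    request_fn is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a requests.Response).
    """
    for attempt in range(retries + 1):
        response = request_fn()
        if response.status_code < 500:
            return response
        if attempt < retries:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    return response  # give up after the final attempt


# Demonstrate with a stub that fails twice before succeeding.
class FakeResponse:
    def __init__(self, status_code):
        self.status_code = status_code

calls = {"n": 0}

def flaky():
    calls["n"] += 1
    return FakeResponse(500 if calls["n"] < 3 else 200)

resp = submit_with_retry(flaky, backoff=0.0)
```

Capping the number of attempts keeps a persistent server fault from turning into an infinite loop.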
bigquery.py::BigQuery
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
databricks.py::Databricks
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
iris.py::Iris
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
mssql.py::MSSQL
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
postgres.py::Postgres
NEW FUNCTIONS:
- added initial support for Postgres data warehouses
redshift.py::Redshift
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
snowflake.py::Snowflake
UPDATED FUNCTIONS:
- `__init__`
  - added optional parameter `token` to support OAuth-based authentication to Snowflake
  - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
  - optional parameter `private_key` adjusted; this parameter now exclusively accepts DER format
NEW FUNCTIONS:
- `token` - getter and setter added for the new parameter
- `clear_auth` - removes sensitive information from the connection object
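Since `private_key` now accepts DER exclusively, a key held in PEM format must be converted first. PEM is just base64-wrapped DER, so the conversion is mechanical; a stdlib-only sketch (the function name `pem_to_der` is ours, not part of AI-Link):

```python
import base64

def pem_to_der(pem_text: str) -> bytes:
    """Strip the PEM armor lines and base64-decode the body to DER bytes."""
    body = "".join(
        line for line in pem_text.splitlines()
        if line and not line.startswith("-----")
    )
    return base64.b64decode(body)


# Round-trip demonstration with placeholder bytes standing in for a real key.
der_original = bytes(range(32))
pem = (
    "-----BEGIN PRIVATE KEY-----\n"
    + base64.encodebytes(der_original).decode()
    + "-----END PRIVATE KEY-----\n"
)
recovered = pem_to_der(pem)
```

In practice an unencrypted PKCS#8 key can also be converted on the command line, e.g. with `openssl pkcs8 -topk8 -inform PEM -outform DER -nocrypt`.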
synapse.py::Synapse
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
sql_connection.py::SQLConnection
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
sqlalchemy_connection.py::SQLAlchemyConnection
UPDATED FUNCTIONS:
- `__init__` - added optional parameter `warehouse_id`; this is the AtScale warehouse id to associate with the connection when writing tables
NEW FUNCTIONS:
- `clear_auth` - removes sensitive information from the connection object
data_model.py::DataModel
REMOVED FUNCTIONS:
- `get_column_names` - removed in favor of the more general `get_columns` function
UPDATED FUNCTIONS:
- `get_features`
  - fixed a bug where the `atscale_type` field incorrectly returned Standard when `use_unpublished` was set to True
  - the `secondary_attribute` flag is now included in the response
  - added checks to block running this function from a perspective with the `use_published` parameter set to False
  - fixed a bug where passing `use_published` as False would return all calculated measures in the project instead of only those in the data model
- `add_table`
  - added parameter `allow_aggregates`, which allows users to specify whether aggregates may be built off of this dataset
  - parameters `schema` and `database` are now required
- fixed a bug where a variable was referenced before declaration; this affects `writeback_spark` and `writeback_spark_to_spark`
NEW FUNCTIONS:
- `clone` - creates a clone of the current DataModel in the same project
- `get_columns` - returns a dictionary of metadata for all the visible columns in a given dataset
- `get_dimensions` - returns a dictionary of metadata for all dimensions in a DataModel
- `updated_calculated_columns` - updates the SQL expression for a calculated column
- `update_perspective` - updates a perspective to hide the provided inputs
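The behavior of the new `clone` and `update_perspective` functions is easiest to picture with a toy model. The class below is an illustrative stand-in, not AI-Link's `DataModel` API; it only mirrors the semantics the notes describe (cloning a model into the same project, and a perspective that hides the provided inputs):

```python
import copy

class ToyDataModel:
    """Illustrative stand-in mirroring the behavior described above."""

    def __init__(self, name: str, project: list, hidden=None):
        self.name = name
        self.project = project            # shared list standing in for the project
        self.hidden = set(hidden or ())   # inputs hidden by the perspective
        project.append(self)

    def clone(self, new_name: str) -> "ToyDataModel":
        """Create a copy of this model inside the same project."""
        return ToyDataModel(new_name, self.project, copy.copy(self.hidden))

    def update_perspective(self, hide):
        """Replace the set of inputs hidden by the perspective."""
        self.hidden = set(hide)


project = []
base = ToyDataModel("sales", project, hidden={"cost_raw"})
dev = base.clone("sales_dev")
dev.update_perspective({"cost_raw", "margin_internal"})
```

Note that updating the clone's perspective leaves the original model's hidden set untouched.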
data_model_helpers.py
UPDATED FUNCTIONS:
- `_check_joins`
  - now prompts the user when a `join_column` cannot be automatically mapped to a field or value column
  - impacts the following functions: `DataModel.add_query_dataset`, `DataModel.writeback`, `DataModel.writeback_spark`, `DataModel.writeback_spark_to_spark`, `DataModel.add_table`, `DataModel.create_dataset_relationship`
model_utils.py
UPDATED FUNCTIONS:
- `_create_dataset_relationship` - fixed a bug where `writeback` functions would fail with replacement because the `dataset_id` wasn't set correctly
db_utils.py
UPDATED FUNCTIONS:
- `_get_key_cols` - fixed a bug where `writeback` functions could use the wrong quotes when querying to join in missing key columns
project.py::Project
UPDATED FUNCTIONS:
- `get_snapshots` - return type adjusted to a list of dictionaries to make the response more usable
NEW FUNCTIONS:
- `create_data_model` - creates a new, empty DataModel in the current project
prediction_utils.py
UPDATED FUNCTIONS:
- `join_udf` - added parameters `allow_aggregates` and `create_hinted_aggs` so users can specify aggregate behavior
- `write_udf_to_qds`
  - renamed to `write_snowpark_udf_to_qds`, as this function is only Snowpark compatible
  - added a `publish` parameter to the function