Upgrade Requirements
Carefully read this information before upgrading AtScale to the corresponding release.
It is strongly recommended that you upgrade container-based AtScale release by release, without skipping intermediate releases. AtScale is currently introducing many new features and fixes, and skipping an intermediate release can produce unpredictable results even when you follow the documented requirements for a specific upgrade. This recommendation is expected to relax in late 2025; check back as upgrade requirements are simplified.
It is recommended that you back up your AtScale PostgreSQL instance before performing an upgrade.
C2025.6.1
This release contains a critical issue that was resolved in C2025.6.2. If you are on C2025.6.1 or considering upgrading to it, you should instead upgrade to C2025.6.2. For more information, refer to the C2025.6.2 resolved issues.
C2025.6.0
This release contains a critical issue that was resolved in C2025.6.2. If you are on C2025.6.0 or considering upgrading to it, you should instead upgrade to C2025.6.2. For more information, refer to the C2025.6.2 resolved issues.
Breaking change: Increased storage size requirements for Redis
In C2025.6.0, the storage size requirements for Redis in the default values.yaml file increased to improve reliability. Because Kubernetes doesn't allow Helm upgrades to resize PVCs for StatefulSets, you must stop and delete the Redis StatefulSets and PVCs before upgrading AtScale. This enables the upgrade process to create new PVCs with the proper size.
This procedure is only required if you have not overridden redis.master.persistence.size and redis.replicas.persistence.size in your values override file. If you have overrides defined for these values, no action is required.
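If you're unsure whether these overrides are defined, a quick grep of your values override file can tell you. This is a sketch using a hypothetical file path and size values; point the grep at your real override file:

```shell
# Hypothetical override file for illustration; substitute your real file path.
cat > /tmp/atscale-overrides.yaml <<'EOF'
redis:
  master:
    persistence:
      size: 20Gi
  replicas:
    persistence:
      size: 20Gi
EOF

# If this prints size entries under the redis persistence sections, the
# overrides are defined and you can skip the PVC deletion procedure below.
grep -A2 'persistence:' /tmp/atscale-overrides.yaml | grep 'size:'
```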
To delete the Redis PVCs and StatefulSets:
- On your Kubernetes cluster, run the following to identify the Redis resources:
kubectl get statefulset -n <atscale_namespace>
kubectl get pvc -n <atscale_namespace>
Where <atscale_namespace> is the namespace in which AtScale is installed.
This should return the following StatefulSets:
redis-master
redis-replicas
And the following PVCs:
redis-data-redis-master-0
redis-data-redis-replicas-0
- Run the following to safely stop Redis:
kubectl scale statefulset redis-master --replicas=0 -n <atscale_namespace>
kubectl scale statefulset redis-replicas --replicas=0 -n <atscale_namespace>
- Delete the Redis StatefulSets:
kubectl delete statefulset redis-master -n <atscale_namespace>
kubectl delete statefulset redis-replicas -n <atscale_namespace>
- Delete the Redis PVCs:
kubectl delete pvc redis-data-redis-master-0 -n <atscale_namespace>
kubectl delete pvc redis-data-redis-replicas-0 -n <atscale_namespace>
- Upgrade AtScale to C2025.6.0:
helm upgrade atscale oci://docker.io/atscaleinc/atscale --version 2025.6.0 -n <atscale_namespace> -f <override_file>
Where <atscale_namespace> is the namespace in which AtScale is installed, and <override_file> is your values override file.
- Verify that the new Redis PVC was created with the updated size:
kubectl describe pvc redis-data-redis-master-0 -n <atscale_namespace> | grep -i "capacity"
The output should be similar to:
Capacity: 16Gi
Configuration setting migrations
In C2025.6.0, the query.planning.semiAdditive.multiMeasureHandling setting replaced the Boolean query.planning.semiAdditive.forceIndividualMeasureCheck setting at both the global and model levels. Existing configurations are automatically migrated as follows:
- query.planning.semiAdditive.forceIndividualMeasureCheck: true is migrated to query.planning.semiAdditive.multiMeasureHandling: measure
- query.planning.semiAdditive.forceIndividualMeasureCheck: false is migrated to query.planning.semiAdditive.multiMeasureHandling: model
For more information on the new settings, see Query Settings (global-level) and Query Settings (model-level).
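As an illustration, a global-level configuration that previously set the Boolean flag is rewritten by the automatic migration as follows (a sketch; the values follow the mapping above):

```yaml
# Before upgrading to C2025.6.0:
query.planning.semiAdditive.forceIndividualMeasureCheck: true

# After the automatic migration:
query.planning.semiAdditive.multiMeasureHandling: measure
```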
ATSCALE-25753
C2025.5.0
This release contains a critical issue that was resolved in C2025.6.2. If you are on C2025.5.0 or considering upgrading to it, you should instead upgrade to C2025.6.2. For more information, refer to the C2025.6.2 resolved issues.
Upgrade prerequisites and breaking changes
Before upgrading to AtScale C2025.5.0, you need to prepare by saving some of your existing secrets to your values override file.
- Copy all Keycloak users and client secrets:
  - Obtain the secrets for the Keycloak users atscale-kc-admin and kc-admin from the existing secret keycloak-secret:
kubectl get secret -n <atscale-release-namespace> keycloak-secret -o json | jq -r .data.KC_ATSCALE_ADMIN_PASSWORD | base64 -d
kubectl get secret -n <atscale-release-namespace> keycloak-secret -o json | jq -r .data.KEYCLOAK_ADMIN_PASSWORD | base64 -d
  - Obtain the AtScale client secrets:
kubectl get secret -n <atscale-release-namespace> keycloak-api-secret -o json | jq -r .data.clientSecret | base64 -d
kubectl get secret -n <atscale-release-namespace> keycloak-engine-secret -o json | jq -r .data.clientSecret | base64 -d
kubectl get secret -n <atscale-release-namespace> keycloak-entitlement-secret -o json | jq -r .data.clientSecret | base64 -d
kubectl get secret -n <atscale-release-namespace> keycloak-api-secret -o json | jq -r .data.publicApiClientSecret | base64 -d
kubectl get secret -n <atscale-release-namespace> keycloak-sml-secret -o json | jq -r .data.clientSecret | base64 -d
- Add the secrets you obtained above to your values override file:
global:
  atscale:
    keycloak:
      users:
        atscale:
          # defaults to atscale-kc-admin
          username: "your-username"
          # kubectl get secret -n <atscale-release-namespace> keycloak-secret -o json | jq -r .data.KC_ATSCALE_ADMIN_PASSWORD | base64 -d
          password: "your-password"
        admin:
          # defaults to kc-admin
          username: "your-username"
          # kubectl get secret -n <atscale-release-namespace> keycloak-secret -o json | jq -r .data.KEYCLOAK_ADMIN_PASSWORD | base64 -d
          password: "your-password"
      clients:
        # kubectl get secret -n <atscale-release-namespace> keycloak-api-secret -o json | jq -r .data.clientSecret | base64 -d
        api:
          clientSecret: "your-client-secret"
        # kubectl get secret -n <atscale-release-namespace> keycloak-engine-secret -o json | jq -r .data.clientSecret | base64 -d
        engine:
          clientSecret: "your-client-secret"
        # kubectl get secret -n <atscale-release-namespace> keycloak-entitlement-secret -o json | jq -r .data.clientSecret | base64 -d
        entitlement:
          clientSecret: "your-client-secret"
        # kubectl get secret -n <atscale-release-namespace> keycloak-sml-secret -o json | jq -r .data.clientSecret | base64 -d
        modeler:
          clientSecret: "your-client-secret"
        # kubectl get secret -n <atscale-release-namespace> keycloak-api-secret -o json | jq -r .data.publicApiClientSecret | base64 -d
        publicApi:
          clientSecret: "your-client-secret"
- If you have a multi-node cluster:
  - If you are running on a multi-node cluster that doesn't have zone redundant storage (such as ZFS, EFS, NFS, or Regional PD):
    - If you don't have a default storage class set, specify the one you use in the values override file. For example:
# An EBS storage example (not zone redundant)
telemetry:
  persistence:
    storageClass: "gp2"
    accessModes:
      - ReadWriteOnce
    - Ensure that your engine pod and OTel pods are running on the same nodes, then add their node selectors to the values override file. You can obtain the node labels by running kubectl get nodes --show-labels.
atscale-engine:
  nodeSelector:
    node-label: node-label-value
telemetry:
  nodeSelector:
    node-label: node-label-value
  - If you are running on a multi-node cluster that does have zone redundant storage, make sure that accessModes under global.atscale.telemetry.persistence is set to ReadWriteMany. For example:
# An EFS storage example (zone redundant)
telemetry:
  persistence:
    storageClass: "efs-sc"
    accessModes:
      - ReadWriteMany
In this case, node selectors are not applicable because you can write to this PV from any node.
- Upgrade AtScale.
Required changes in the Identity Broker
After upgrading, you must make the following change in the Identity Broker:
- Log in to AtScale as an admin, open the main menu, and select Security. The Identity Broker opens.
- In the left-side navigation, click Clients.
- Open the realm-management client.
- Go to the Authorization tab, then click the Policies tab.
- Click public-api-token-exchange.
- In the Clients field, add the atscale-engine client.
- Click Save.
If AtScale is configured with an external IdP (Entra ID, Okta), you must also do the following:
- In the Identity Broker, click Identity providers in the left-side navigation, then open your IdP.
- Go to the Permissions tab and enable the Permissions enabled option.
- In the Permissions list section, click the token-exchange permission.
- In the Policies field, add the public-api-token-exchange policy.
- Click Save.
Additionally, if your IdP is configured with OIDC, you must make the following change:
- In the Identity Broker, click Identity providers in the left-side navigation, then open your IdP.
- On the Settings tab, expand the Advanced section.
- In the Scopes field, add the following scopes: openid profile email offline_access.
- Click Save.
New commands for obtaining AtScale admin users and passwords
In C2025.5.0, the AtScale admin users and clients have been moved to a new secret. After upgrading, you can use the following commands to obtain the default admin users and their passwords.
# AtScale Admin USER
kubectl get secret keycloak-users-secret -n <release-namespace> -o jsonpath="{.data.atscaleAdmin}" | base64 -d
# AtScale Admin Password
kubectl get secret keycloak-users-secret -n <release-namespace> -o jsonpath="{.data.atscaleAdminPassword}" | base64 -d
# Keycloak Admin User
kubectl get secret keycloak-users-secret -n <release-namespace> -o jsonpath="{.data.keycloakAdmin}" | base64 -d
# Keycloak Admin Password
kubectl get secret keycloak-users-secret -n <release-namespace> -o jsonpath="{.data.keycloakAdminPassword}" | base64 -d
Troubleshooting
The following are common issues you may encounter when upgrading to C2025.5.0.
- In this release, the StatefulSets for Redis and Keycloak changed. When upgrading, if you receive an error like Error: UPGRADE FAILED: cannot patch "redis-master" with kind StatefulSet, it is because the upgrade process is attempting to make changes to the StatefulSets and your system is preventing it from doing so. To fix the issue, you need to remove your StatefulSets:
kubectl get statefulsets -n <your-namespace>
# Grab the names from the list
kubectl delete statefulset <name of statefulset>
Note: This is required for Redis, and may be needed for Keycloak as well. It is safe to delete the StatefulSets and run the upgrade again.
- If you are performing multiple upgrades for any reason, ensure that the client secrets for atscale-engine, atscale-entitlement, atscale-sml, and atscale-api have updated in your pods. These should match the values you specified in the values override file above. It is safe to terminate the pods so they pick up the current secrets.
C2025.3.1
Breaking change: ATSCALE_CRYPTO_KEY must be a 64 character hexadecimal string
Before upgrading to C2025.3.1 or later, verify that the ATSCALE_CRYPTO_KEY secret is a 64-character hexadecimal string. This environment variable is stored in either the tableau-secret or the global.atscale.encryption section of the values override file. If this variable is not a 64-character hexadecimal string, it must be replaced.
Replacing the variable may cause Tableau and LLM connections to be lost. If this occurs, they must be recreated.
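As a sketch, you can validate a key value and generate a compliant replacement from the command line. The KEY value below is a placeholder for illustration, not a real secret:

```shell
# Placeholder key for illustration; substitute your actual ATSCALE_CRYPTO_KEY value.
KEY="0123abcd"

# Valid keys are exactly 64 hexadecimal characters.
if printf '%s' "$KEY" | grep -Eq '^[0-9a-fA-F]{64}$'; then
  echo "key is valid"
else
  echo "key must be replaced"
fi

# Generate a compliant key: 32 random bytes, hex-encoded, yields 64 hex characters.
NEW_KEY=$(openssl rand -hex 32)
printf '%s' "$NEW_KEY" | grep -Eq '^[0-9a-fA-F]{64}$' && echo "new key is valid"
```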
Update existing Tableau Server definitions
In C2025.3.1, the workflow for configuring Tableau Server definitions changed. Any existing Tableau Server definitions must be reconfigured using the new workflow after upgrading to C2025.3.1 or later. For instructions, see Configuring Connections to Tableau Server.
ATSCALE-25647
C2025.3.0
Breaking change: Nginx Ingress Controller replaced with Nginx Proxy
In C2025.3.0, the Nginx Ingress Controller was removed and replaced with Nginx Proxy. As part of this change, the nginxproxy section was removed from the AtScale values file, and all proxy configurations moved to the atscale-proxy section. This brings more flexibility to the AtScale deployment, mainly:
- The ability to install AtScale in multiple namespaces.
- A single point of entry for the AtScale application.
- Automatic generation of self-signed TLS secrets.
- Use of HTTP/1.1 or 2.0 by default.
- Ability to expose the AtScale application as either a service or an ingress.
- Exposure of Nginx Proxy parameters on the Helm chart.
If you currently use nginxproxy, you must migrate all the settings in the nginxproxy section of the values override file to the atscale-proxy section before upgrading to AtScale C2025.3.0 from an earlier version. Notably:
- Migrate all annotations from nginxproxy.service.annotations to atscale-proxy.service.annotations.
- Redirect all routes created to the nginx service to the atscale-proxy service.
- Review OpenShift Routes, Istio VirtualServices, and other environment-specific routing Custom Resources, and point them to the atscale-proxy service.
If you do not use nginxproxy, no action is needed.
For more information on configuring the new atscale-proxy service, see Configuring the AtScale Proxy Service.
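As a sketch of the annotations migration, using a standard Kubernetes load-balancer annotation as a stand-in for whatever annotations your deployment actually defines:

```yaml
# Before (values override file, C2025.2.x and earlier):
nginxproxy:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # example annotation

# After (values override file, C2025.3.0 and later):
atscale-proxy:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"  # example annotation
```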
Breaking change: caCerts in values override file must be Base64-encoded
As of C2025.3.0, the value of the caCerts property in the values override file must be Base64-encoded.
Before upgrading to C2025.3.0 from an earlier version, you must ensure that the value of this property is in Base64 format. For instructions on configuring your values override file, see Creating a Values Override File. For instructions on updating the values override file, see Updating the Values Override File.
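For example, to produce a Base64-encoded value from a PEM bundle on a Linux host. The file path and contents here are stand-ins for your real CA bundle; on macOS, use base64 without the -w flag:

```shell
# Stand-in CA bundle for illustration; replace with your real PEM file.
printf 'EXAMPLE CERT CONTENT\n' > /tmp/ca-bundle.pem

# Encode as a single-line Base64 string suitable for the caCerts property:
base64 -w 0 /tmp/ca-bundle.pem

# Verify the round trip decodes back to the original PEM contents:
base64 -w 0 /tmp/ca-bundle.pem | base64 -d
```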
C2025.2.0
(Optional) Configure an external Grafana instance for monitoring
As of C2025.2.0, Grafana (used to provide monitoring for the AtScale services) has been deprecated, and is no longer included in the AtScale Helm chart. If you want to continue to use Grafana for monitoring, you must configure an external instance. For instructions, see Configuring an External Grafana Instance for Monitoring AtScale.
This is an optional configuration, and is not required for upgrading to C2025.2.0 or later.