Migration Guide
Migrating from KEDA v1 to v2
Please note that you cannot run both KEDA v1 and v2 on the same Kubernetes cluster. You need to uninstall KEDA v1 first in order to install and use KEDA v2.
💡 NOTE: When uninstalling KEDA v1, make sure the v1 CRDs are uninstalled from the cluster as well.
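For example, you can check whether any v1 CRDs (API group keda.k8s.io) are still registered and remove them. This is only a sketch, assuming KEDA v1 was installed with its default CRD names:
# List any CRDs left over from KEDA v1 (API group keda.k8s.io)
kubectl get crd | grep keda.k8s.io
# Remove the leftover v1 CRDs, assuming the default names
kubectl delete crd scaledobjects.keda.k8s.io triggerauthentications.keda.k8s.io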
KEDA v2 uses a new API group for its Custom Resource Definitions (CRDs): keda.sh instead of keda.k8s.io, and introduces a new Custom Resource for scaling of Jobs. See full details on KEDA Custom Resources here.
Here’s an overview of what’s changed:
- Scaling of Deployments
- Scaling of Jobs
- Improved flexibility & usability of trigger metadata
- Scalers
- TriggerAuthentication
Scaling of Deployments
To scale Deployments with KEDA v2, you only need to make a few modifications to your existing v1 ScaledObject definitions so that they comply with v2:
- Change the value of the apiVersion property from keda.k8s.io/v1alpha1 to keda.sh/v1alpha1
- Rename property spec.scaleTargetRef.deploymentName to spec.scaleTargetRef.name
- Rename property spec.scaleTargetRef.containerName to spec.scaleTargetRef.envSourceContainerName
- The deploymentName label (in metadata.labels) no longer needs to be specified on a v2 ScaledObject (it was mandatory on older versions of v1)
Please see the examples below or refer to the full v2 ScaledObject Specification.
Example of v1 ScaledObject
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: { scaled-object-name }
  labels:
    deploymentName: { deployment-name }
spec:
  scaleTargetRef:
    deploymentName: { deployment-name }
    containerName: { container-name }
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
  # {list of triggers to activate the deployment}
Example of v2 ScaledObject
apiVersion: keda.sh/v1alpha1                     # <--- Property value was changed
kind: ScaledObject
metadata:                                        # <--- labels.deploymentName is not needed
  name: { scaled-object-name }
spec:
  scaleTargetRef:
    name: { deployment-name }                    # <--- Property name was changed
    envSourceContainerName: { container-name }   # <--- Property name was changed
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
  # {list of triggers to activate the deployment}
Scaling of Jobs
To scale Jobs with KEDA v2, you only need to make a few modifications to your existing v1 ScaledObject definitions so that they comply with v2:
- Change the value of the apiVersion property from keda.k8s.io/v1alpha1 to keda.sh/v1alpha1
- Change the value of the kind property from ScaledObject to ScaledJob
- Remove the property spec.scaleType
- Remove the properties spec.cooldownPeriod and spec.minReplicaCount
You can configure successfulJobsHistoryLimit and failedJobsHistoryLimit; they will remove the old job history automatically.
Please see the examples below or refer to the full v2 ScaledJob Specification.
Example of v1 ScaledObject for Jobs scaling
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: { scaled-object-name }
spec:
  scaleType: job
  jobTargetRef:
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 600
    backoffLimit: 6
    template:
      # {job template}
  pollingInterval: 30
  cooldownPeriod: 300
  minReplicaCount: 0
  maxReplicaCount: 100
  triggers:
  # {list of triggers to create jobs}
Example of v2 ScaledJob
apiVersion: keda.sh/v1alpha1    # <--- Property value was changed
kind: ScaledJob                 # <--- Property value was changed
metadata:
  name: { scaled-job-name }
spec:                           # <--- spec.scaleType is not needed
  jobTargetRef:
    parallelism: 1
    completions: 1
    activeDeadlineSeconds: 600
    backoffLimit: 6
    template:
      # {job template}
  pollingInterval: 30           # <--- spec.cooldownPeriod and spec.minReplicaCount are not needed
  successfulJobsHistoryLimit: 5 # <--- Property is added
  failedJobsHistoryLimit: 5     # <--- Property is added
  maxReplicaCount: 100
  triggers:
  # {list of triggers to create jobs}
Improved flexibility & usability of trigger metadata
We’ve introduced more options to configure trigger metadata to give users more flexibility.
💡 NOTE: Changes only apply to trigger metadata and don’t impact usage of TriggerAuthentication.
Here’s an overview:
Scaler | 1.x | 2.0
---|---|---
azure-blob | connection (Default: AzureWebJobsStorage) | connectionFromEnv
azure-monitor | activeDirectoryClientId, activeDirectoryClientPassword | activeDirectoryClientId, activeDirectoryClientIdFromEnv, activeDirectoryClientPasswordFromEnv
azure-queue | connection (Default: AzureWebJobsStorage) | connectionFromEnv
azure-servicebus | connection | connectionFromEnv
azure-eventhub | storageConnection (Default: AzureWebJobsStorage), connection (Default: EventHub) | storageConnectionFromEnv, connectionFromEnv
aws-cloudwatch | awsAccessKeyID (Default: AWS_ACCESS_KEY_ID), awsSecretAccessKey (Default: AWS_SECRET_ACCESS_KEY) | awsAccessKeyID, awsAccessKeyIDFromEnv, awsSecretAccessKeyFromEnv
aws-kinesis-stream | awsAccessKeyID (Default: AWS_ACCESS_KEY_ID), awsSecretAccessKey (Default: AWS_SECRET_ACCESS_KEY) | awsAccessKeyID, awsAccessKeyIDFromEnv, awsSecretAccessKeyFromEnv
aws-sqs-queue | awsAccessKeyID (Default: AWS_ACCESS_KEY_ID), awsSecretAccessKey (Default: AWS_SECRET_ACCESS_KEY) | awsAccessKeyID, awsAccessKeyIDFromEnv, awsSecretAccessKeyFromEnv
kafka | (none) | (none)
rabbitmq | apiHost, host | apiHost, host, hostFromEnv
prometheus | (none) | (none)
cron | (none) | (none)
redis | address, host, port, password | address, addressFromEnv, host, hostFromEnv, port, passwordFromEnv
redis-streams | address, host, port, password | address, addressFromEnv, host, hostFromEnv, port, passwordFromEnv
gcp-pubsub | credentials | credentialsFromEnv
external | (any matching value) | (any matching value with FromEnv suffix)
liiklus | (none) | (none)
stan | (none) | (none)
huawei-cloudeye | (none) | (none)
postgresql | connection, password | connectionFromEnv, passwordFromEnv
mysql | connectionString, password | connectionStringFromEnv, passwordFromEnv
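For example, a v1 trigger that referenced a connection string through the connection property now uses connectionFromEnv in v2. This is a sketch only; the queue name and environment variable name below are illustrative:
Example of a v1 trigger:
triggers:
- type: azure-queue
  metadata:
    queueName: orders                          # illustrative queue name
    connection: STORAGE_CONNECTIONSTRING       # env var on the scale target holding the connection string
The same trigger in 2.0:
triggers:
- type: azure-queue
  metadata:
    queueName: orders
    connectionFromEnv: STORAGE_CONNECTIONSTRING   # <--- Property name was changed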
Scalers
Azure Service Bus
The queueLength parameter was renamed to messageCount.
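As a sketch (the queue name and environment variable name are illustrative), a v1 trigger such as:
triggers:
- type: azure-servicebus
  metadata:
    queueName: orders
    queueLength: "5"
    connection: SERVICEBUS_CONNECTIONSTRING
becomes in 2.0:
triggers:
- type: azure-servicebus
  metadata:
    queueName: orders
    messageCount: "5"                               # <--- Property name was changed
    connectionFromEnv: SERVICEBUS_CONNECTIONSTRING  # <--- See trigger metadata changes above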
Kafka
The authMode property was replaced with the sasl and tls properties. Please refer to the documentation on Kafka Authentication Parameters for details.
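As a hedged sketch (object and Secret names are illustrative, and the exact parameters depend on your broker setup), the sasl and tls values that replace authMode are typically supplied through a TriggerAuthentication:
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-kafka-credentials     # illustrative name
spec:
  secretTargetRef:
  - parameter: sasl                # e.g. "plaintext"
    name: kafka-secrets            # illustrative Secret
    key: sasl
  - parameter: username
    name: kafka-secrets
    key: username
  - parameter: password
    name: kafka-secrets
    key: password
  - parameter: tls                 # e.g. "enable"
    name: kafka-secrets
    key: tls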
RabbitMQ
In KEDA 2.0 the RabbitMQ scaler has only the host parameter, and the protocol for communication can be specified with the protocol parameter (http or amqp). The default value is amqp. The behavior changes only for scalers that were using the HTTP protocol.
Example of RabbitMQ trigger before 2.0:
triggers:
- type: rabbitmq
  metadata:
    queueLength: "20"
    queueName: testqueue
    includeUnacked: "true"
    apiHost: "https://guest:password@localhost:443/vhostname"
The same trigger in 2.0:
triggers:
- type: rabbitmq
  metadata:
    queueLength: "20"
    queueName: testqueue
    protocol: "http"
    host: "https://guest:password@localhost:443/vhostname"
TriggerAuthentication
To use authentication via TriggerAuthentication with KEDA v2, you need to make one change (see the example below):
- Change the value of the apiVersion property from keda.k8s.io/v1alpha1 to keda.sh/v1alpha1
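For example, only the apiVersion line of an existing definition changes; the secret reference below is illustrative:
apiVersion: keda.sh/v1alpha1                 # <--- Property value was changed
kind: TriggerAuthentication
metadata:
  name: { trigger-authentication-name }
spec:
  secretTargetRef:
  - parameter: connection                    # illustrative parameter
    name: my-secret                          # illustrative Secret name
    key: connection-string                   # illustrative key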
For more details, please refer to the full v2 TriggerAuthentication Specification.