Upgrade Konvoy for DKP Enterprise

Upgrade your Konvoy environment within the DKP Enterprise license.

Prerequisites

The following infrastructure environments are supported:

  • Amazon Web Services (AWS)

  • Microsoft Azure

  • Pre-provisioned environments

Overview

To upgrade Konvoy for DKP Enterprise:

  1. Upgrade the Cluster API (CAPI) components
  2. Upgrade the core addons
  3. Upgrade the Kubernetes version

Run all three steps on the management cluster (Kommander cluster) first. Then run the second and third steps on each additional managed cluster (Konvoy cluster), one cluster at a time, using the KUBECONFIG configured for that cluster.
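
For example, a minimal sketch of that flow, assuming hypothetical kubeconfig files named management.conf and managed-1.conf:

# Point dkp at the management cluster and run steps 1-3 (hypothetical path)
export KUBECONFIG=${HOME}/management.conf

# Then point dkp at one managed cluster at a time and run steps 2-3 (hypothetical path)
export KUBECONFIG=${HOME}/managed-1.conf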

NOTE: For pre-provisioned air-gapped environments, you must run konvoy-image upload artifacts.

For a full list of DKP Enterprise features, see DKP Enterprise.

NOTE: You must maintain your attached clusters manually. Review the documentation from your cloud provider for more information.

Upgrade the CAPI components

New versions of DKP come pre-bundled with newer versions of CAPI, newer versions of infrastructure providers, or new infrastructure providers. When using a new version of the DKP CLI, upgrade all of these components first.

If you have more than one management cluster (Kommander cluster), you must upgrade the CAPI components on each of these clusters.

IMPORTANT: Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

Run the following command to upgrade the CAPI components:

dkp upgrade capi-components

The output resembles the following:

✓ Upgrading CAPI components
✓ Waiting for CAPI components to be upgraded
✓ Initializing new CAPI components
✓ Deleting Outdated Global ClusterResourceSets

If the upgrade fails, review the prerequisites section and ensure that you’ve followed the steps in the DKP upgrade overview.
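
To spot-check the result, you can also confirm that the upgraded controller pods are running. This is a hedged example that assumes the default Cluster API namespaces (capi-system, plus an infrastructure provider namespace such as capa-system on AWS):

kubectl get pods -n capi-system
kubectl get pods -n capa-system   # AWS infrastructure provider namespace, if applicable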

Upgrade the core addons

To install the core addons, DKP relies on the ClusterResourceSet Cluster API feature. The CAPI components upgrade in the previous step deleted the outdated global ClusterResourceSets because, prior to DKP 2.2, some addons were installed using a global configuration. To support individual cluster upgrades, DKP 2.2 now installs each addon with its own ClusterResourceSet and corresponding referenced resources, all named using the cluster’s name as a suffix. For example: calico-cni-installation-my-aws-cluster.
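
For example, a hedged way to list the per-cluster ClusterResourceSets for the my-aws-cluster example above, assuming they live in the default namespace:

kubectl get clusterresourcesets.addons.cluster.x-k8s.io | grep my-aws-cluster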

WARNING: If you have modified any of the ClusterResourceSet definitions, those changes are not preserved when you run the dkp upgrade addons command. You must use the --dry-run -o yaml options to save the new configuration to a file, and then reapply the same changes after each upgrade.
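
For example, a sketch of saving that configuration with the --dry-run -o yaml options mentioned above (the cluster name and output file name are placeholders):

dkp upgrade addons aws --cluster-name=my-aws-cluster --dry-run -o yaml > addons-my-aws-cluster.yaml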

Your cluster comes preconfigured with several core addons that provide functionality to your cluster upon creation: CSI, CNI, Cluster Autoscaler, and Node Feature Discovery. New versions of DKP may come pre-bundled with newer versions of these addons. Perform the following steps to upgrade them. If you have any additional managed clusters, you must upgrade the core addons and the Kubernetes version for each one.

IMPORTANT: Ensure your dkp configuration references the management cluster where you want to run the upgrade by setting the KUBECONFIG environment variable, or using the --kubeconfig flag, in accordance with Kubernetes conventions.

Upgrade the core addons in a cluster with the dkp upgrade addons command, specifying the cluster infrastructure (one of aws, azure, or preprovisioned) and the name of the cluster.

Examples:

export CLUSTER_NAME=my-azure-cluster
dkp upgrade addons azure --cluster-name=${CLUSTER_NAME}

OR

export CLUSTER_NAME=my-aws-cluster
dkp upgrade addons aws --cluster-name=${CLUSTER_NAME}

The output for the AWS example should be similar to:

Generating addon resources
clusterresourceset.addons.cluster.x-k8s.io/calico-cni-installation-my-aws-cluster upgraded
configmap/calico-cni-installation-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/tigera-operator-my-aws-cluster upgraded
configmap/tigera-operator-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/aws-ebs-csi-my-aws-cluster upgraded
configmap/aws-ebs-csi-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/cluster-autoscaler-my-aws-cluster upgraded
configmap/cluster-autoscaler-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/node-feature-discovery-my-aws-cluster upgraded
configmap/node-feature-discovery-my-aws-cluster upgraded
clusterresourceset.addons.cluster.x-k8s.io/nvidia-feature-discovery-my-aws-cluster upgraded
configmap/nvidia-feature-discovery-my-aws-cluster upgraded

IMPORTANT: If your cluster was originally upgraded from version 1.8 and was using the EBS CSI driver, you must also run the following commands to upgrade the driver.

export CLUSTER_NAME=my-aws-cluster
helm uninstall -n kube-system awsebscsiprovisioner-kubeaddons
kubectl label cluster ${CLUSTER_NAME} konvoy.d2iq.io/csi=aws-ebs
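
To confirm the label was applied, a hedged check with standard kubectl (assuming the Cluster object is in the default namespace):

kubectl get cluster ${CLUSTER_NAME} --show-labels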

See also

DKP upgrade addons

Once complete, begin upgrading the Kubernetes version.

Upgrade the Kubernetes version

When upgrading the Kubernetes version of a cluster, first upgrade the control plane and then the node pools. If you have any additional managed clusters, you will need to upgrade the core addons and Kubernetes version for each one.

NOTE: If an AMI was specified when the cluster was initially created, you must build a new AMI with Konvoy Image Builder and pass it to the upgrade commands with the --ami flag.
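
For example, a hedged sketch of that workflow; the image definition path and AMI ID below are placeholders, not values from this guide:

konvoy-image build <path-to-image-definition.yaml>
dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8 --ami=<ami-id>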

  1. Upgrade the Kubernetes version of the control plane.

    dkp update controlplane aws --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
    

    The output should be similar to:

    Updating control plane resource controlplane.cluster.x-k8s.io/v1beta1, Kind=KubeadmControlPlane default/my-aws-cluster-control-plane
    Waiting for control plane update to finish.
     ✓ Updating the control plane
    
  2. Upgrade the Kubernetes version of each of your node pools. Get a list of all node pools available in your cluster by running the following command:

    dkp get nodepool --cluster-name ${CLUSTER_NAME}
    
  3. Upgrade the Kubernetes version of the node pool, replacing my-nodepool with the name of the node pool:

    export NODEPOOL_NAME=<my-nodepool>
    
    dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
    

The output should be similar to:

Updating node pool resource cluster.x-k8s.io/v1beta1, Kind=MachineDeployment default/my-aws-cluster-my-nodepool
Waiting for node pool update to finish.
 ✓ Updating the my-aws-cluster-my-nodepool node pool

Repeat this step for each additional node pool.
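
If you have several node pools, a small shell loop can reduce repetition. This is a hedged sketch; the node pool names are placeholders:

# Upgrade each listed node pool in turn (replace the names with your own)
for NODEPOOL_NAME in my-nodepool-1 my-nodepool-2; do
  dkp update nodepool aws ${NODEPOOL_NAME} --cluster-name=${CLUSTER_NAME} --kubernetes-version=v1.22.8
done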

For the overall process of upgrading to the latest version of DKP, refer back to DKP Upgrade.