Attach Amazon EKS Cluster to Kommander

Attach an existing EKS cluster to Kommander

You can attach existing Kubernetes clusters to Kommander. After you attach a cluster, you can use Kommander to examine and manage it. The following procedure shows how to attach an existing Amazon Elastic Kubernetes Service (EKS) cluster to Kommander.

Before you begin

This procedure requires the following items and configurations:

NOTE: This procedure assumes you have one or more existing Amazon EKS clusters that are up and running, and that you have administrative privileges on them. Refer to the Amazon EKS documentation for setup and configuration information.
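
If a cluster does not yet appear in your local kubeconfig, you can add a context for it with the AWS CLI before you start. This is a minimal sketch; the region and cluster name are placeholders for your own values:

    aws eks update-kubeconfig --region us-west-2 --name <your-eks-cluster-name>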

Attach Amazon EKS Clusters to Kommander

  1. Ensure you are connected to your EKS clusters. Enter the following commands for each of your clusters:

    kubectl config get-contexts
    kubectl config use-context <context for first eks cluster>
    
  2. Confirm kubectl can access the EKS cluster.

    kubectl get nodes
    
  3. Create a service account for Kommander on your EKS cluster.

    kubectl -n kube-system create serviceaccount kommander-cluster-admin
    
  4. Configure your kommander-cluster-admin service account to have cluster-admin permissions. Enter the following command:

    cat << EOF | kubectl apply -f -
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kommander-cluster-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kommander-cluster-admin
      namespace: kube-system
    EOF
    
  5. You must create a kubeconfig file that is compatible with the DKP UI. First, enter these commands to set the following environment variables. If USER_TOKEN_NAME resolves to an empty value (which can happen on Kubernetes 1.24 or later), see the note following this procedure:

    export USER_TOKEN_NAME=$(kubectl -n kube-system get serviceaccount kommander-cluster-admin -o=jsonpath='{.secrets[0].name}')
    export USER_TOKEN_VALUE=$(kubectl -n kube-system get secret/${USER_TOKEN_NAME} -o=go-template='{{.data.token}}' | base64 --decode)
    export CURRENT_CONTEXT=$(kubectl config current-context)
    export CURRENT_CLUSTER=$(kubectl config view --raw -o=go-template='{{range .contexts}}{{if eq .name "'''${CURRENT_CONTEXT}'''"}}{{ index .context "cluster" }}{{end}}{{end}}')
    export CLUSTER_CA=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ index .cluster "certificate-authority-data" }}{{end}}{{ end }}')
    export CLUSTER_SERVER=$(kubectl config view --raw -o=go-template='{{range .clusters}}{{if eq .name "'''${CURRENT_CLUSTER}'''"}}{{ .cluster.server }}{{end}}{{ end }}')
    
  6. Confirm these variables have been set correctly:

    env | grep -E 'CLUSTER|USER_TOKEN'
    
  7. Create your kubeconfig file to use in the DKP UI. Enter the following command:

    cat << EOF > kommander-cluster-admin-config
    apiVersion: v1
    kind: Config
    current-context: ${CURRENT_CONTEXT}
    contexts:
    - name: ${CURRENT_CONTEXT}
      context:
        cluster: ${CURRENT_CONTEXT}
        user: kommander-cluster-admin
        namespace: kube-system
    clusters:
    - name: ${CURRENT_CONTEXT}
      cluster:
        certificate-authority-data: ${CLUSTER_CA}
        server: ${CLUSTER_SERVER}
    users:
    - name: kommander-cluster-admin
      user:
        token: ${USER_TOKEN_VALUE}
    EOF
    
  8. Verify the kubeconfig file can access the EKS cluster.

    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get all --all-namespaces
    
  9. Copy the kommander-cluster-admin-config file contents to your clipboard.

    cat kommander-cluster-admin-config | pbcopy
    

NOTE: The pbcopy command is available only on macOS. If you are using Linux, enter the following command instead:
cat kommander-cluster-admin-config | xclip -selection clipboard
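
NOTE: On clusters running Kubernetes 1.24 or later, a token Secret is no longer created automatically for a new service account, so the USER_TOKEN_NAME variable in step 5 can resolve to an empty value. In that case, you can create a long-lived token Secret for the service account yourself and point USER_TOKEN_NAME at it before setting the remaining variables. The following is a minimal sketch; the Secret name is only an example:

    cat << EOF | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: kommander-cluster-admin-token    # example name
      namespace: kube-system
      annotations:
        kubernetes.io/service-account.name: kommander-cluster-admin
    type: kubernetes.io/service-account-token
    EOF
    export USER_TOKEN_NAME=kommander-cluster-admin-token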

Now that you have the kubeconfig file, go to the DKP UI and follow these steps:

  1. From the top menu bar, select your target workspace.

  2. On the Dashboard page, select the Add Cluster option in the Actions dropdown menu at the top right.

  3. Select Attach Cluster.

  4. Select the No additional networking restrictions card.

  5. The Cluster Configuration section of the form accepts a kubeconfig file. Paste the contents of your clipboard, or upload the file you created, into the kubeconfig field.

  6. The Cluster Name field automatically populates with the name of the cluster in the kubeconfig. You can edit this field if you want to use a different name for your cluster.

  7. Add labels to classify your cluster as needed.

  8. Select Submit to attach your cluster.

NOTE: If a cluster does not have sufficient resources to deploy all the federated platform services, it fails to stay attached in the DKP UI. If this happens, ensure the cluster has enough resources to run all required pods.
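
To see whether federated platform services are failing to schedule, you can check the attached cluster for pods stuck in the Pending state. This is a quick diagnostic sketch that reuses the kubeconfig file created earlier:

    kubectl --kubeconfig $(pwd)/kommander-cluster-admin-config get pods --all-namespaces --field-selector=status.phase=Pending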

For information on related topics or procedures, refer to the following: