Get Started with EKS

Get started by installing a cluster with the default configuration settings on EKS.

This guide provides instructions for getting started with Konvoy, so you can get a Kubernetes cluster with a basic configuration up and running on Amazon Elastic Kubernetes Service (EKS). If you want to customize your EKS environment, see Install EKS Advanced.

Prerequisites

Before starting the Konvoy installation, verify that you have:

  • An x86_64-based Linux or macOS machine with a supported version of the operating system.
  • The dkp binary for Linux or macOS.
  • Docker version 18.09.2 or later.
  • kubectl for interacting with the running cluster.
  • A valid AWS account with credentials configured.

Configure EKS prerequisites

  1. Follow the steps in IAM Policy Configuration.

  2. Export the AWS Profile with the credentials that you want to use to create the EKS Kubernetes cluster:

    export AWS_PROFILE=<profile>
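
    To confirm that the exported profile resolves to the AWS account you expect, you can optionally check it with the AWS CLI:

    aws sts get-caller-identity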
    

Bootstrap a kind cluster and CAPI controllers

  1. Create a bootstrap cluster:

    dkp create bootstrap --kubeconfig $HOME/.kube/config
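
    The bootstrap cluster hosts the Cluster API (CAPI) controllers. If you want to watch them start, you can optionally list the pods in the bootstrap cluster (namespaces and pod names vary by version):

    kubectl --kubeconfig $HOME/.kube/config get pods -A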
    

Name your cluster

Give your cluster a unique name suitable for your environment. In EKS, the name must be unique, because no two clusters in the same AWS account can have the same name.

Set the environment variable to be used throughout this documentation:

CLUSTER_NAME=my-eks-cluster

Tips:

  1. To get a list of names already in use in your AWS account, use the aws CLI tool. For example:

    aws ec2 describe-vpcs --filter "Name=tag-key,Values=kubernetes.io/cluster" --query "Vpcs[*].Tags[?Key=='kubernetes.io/cluster'].Value | sort(@[*][0])"
    
    [
        "alex-aws-cluster-afe98",
        "sam-aws-cluster-8if9q"
    ]
    
  2. To generate a cluster name in the same format as the example above, use this command. It produces a different unique name every time you run it, so run it only when you need a new name.

    CLUSTER_NAME=$(whoami)-aws-cluster-$(LC_CTYPE=C tr -dc 'a-z0-9' </dev/urandom | fold -w 5 | head -n1)
    echo $CLUSTER_NAME
    
    hunter-aws-cluster-pf4a3
    

Create a new EKS Kubernetes cluster

  1. Make sure your AWS credentials are up to date. Refresh the credentials using this command:

    dkp update bootstrap credentials aws
    
  2. Create a Kubernetes cluster:

    dkp create cluster eks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami)
    
  3. (Optional) Specify an authorized key file to have SSH access to the machines.

    The file must contain exactly one entry, as described in this manual.

    You can use the .pub file that corresponds to your private SSH key. For example, use the public key that corresponds to your RSA private key:

    --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub
    

    The default username for SSH access is konvoy. To use a different username, for example your own, pass it explicitly:

    --ssh-username=$(whoami)
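
    For example, combining these optional flags with the create command from step 2 might look like this; adjust the key file path and username for your environment:

    dkp create cluster eks --cluster-name=${CLUSTER_NAME} --additional-tags=owner=$(whoami) --ssh-public-key-file=${HOME}/.ssh/id_rsa.pub --ssh-username=$(whoami)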
    
  4. Wait for the cluster control plane to be ready:

    kubectl wait --for=condition=ControlPlaneReady "clusters/${CLUSTER_NAME}" --timeout=20m
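
    While waiting, you can also follow progress by inspecting the underlying Cluster API resources, assuming your default kubeconfig still points to the bootstrap cluster as in the wait command above:

    kubectl get clusters,machines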
    

Explore the new Kubernetes cluster

  1. Fetch the kubeconfig file:

    dkp get kubeconfig -c ${CLUSTER_NAME} > ${CLUSTER_NAME}.conf
    

    NOTE: This kubeconfig is temporary and must be renewed, either by rerunning the command above with updated credentials or by running the following AWS CLI command:

    aws eks --region us-west-2 update-kubeconfig --name default_<name-of-cluster>-control-plane
    
  2. List the Nodes with the command:

    kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes
    

NOTE: It may take a couple of minutes for the Status to move to Ready while calico-node pods are being deployed.
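
If you prefer to watch the Nodes until they report Ready, you can add the --watch flag and press Ctrl+C to stop once they do:

kubectl --kubeconfig=${CLUSTER_NAME}.conf get nodes --watch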

  3. List the Pods with the command:

    kubectl --kubeconfig=${CLUSTER_NAME}.conf get pods -A
    

(Optional) Move controllers to the newly-created cluster

  1. Deploy CAPI controllers on the worker cluster:

    dkp create bootstrap controllers --with-aws-bootstrap-credentials=false --kubeconfig ${CLUSTER_NAME}.conf
    
  2. Issue the move command:

    dkp move --to-kubeconfig ${CLUSTER_NAME}.conf
    

    NOTE: After the move operation is complete, remember to specify the --kubeconfig flag pointing to the ${CLUSTER_NAME}.conf file, or make sure that the access credentials from this file become your default credentials.
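
    For kubectl, one convenient way to make these credentials the default is to export the KUBECONFIG environment variable, shown here as an optional convenience and assuming the file is in your current working directory:

    export KUBECONFIG=$(pwd)/${CLUSTER_NAME}.conf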

    Note that the Konvoy move operation has the following limitations:

    • Only one workload cluster is supported. This also implies that Konvoy does not support moving more than one bootstrap cluster onto the same worker cluster.
    • The Konvoy version used for creating the worker cluster must match the Konvoy version used for deleting the worker cluster.
    • The Konvoy version used for deploying a bootstrap cluster must match the Konvoy version used for deploying a worker cluster.
    • Konvoy only supports moving all namespaces in the cluster; Konvoy does not support migration of individual namespaces.
    • You must ensure that the CAPI controllers running on the worker cluster have sufficient permissions.
  3. Remove the bootstrap cluster, as the worker cluster is now self-managed:

    dkp delete bootstrap --kubeconfig $HOME/.kube/config
    

Move controllers back to the temporary bootstrap cluster

Skip this section if you did not move controllers to the newly-created cluster in the previous section.

  1. Create a bootstrap cluster:

    dkp create bootstrap --kubeconfig $HOME/.kube/config
    
  2. Issue the move command:

    dkp move --from-kubeconfig ${CLUSTER_NAME}.conf --to-kubeconfig $HOME/.kube/config
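
    To confirm that the move completed, you can optionally check that the Cluster resource now appears in the bootstrap cluster:

    kubectl --kubeconfig $HOME/.kube/config get clusters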
    

Delete the Kubernetes cluster and clean up your environment

  1. Delete the provisioned Kubernetes cluster and wait a few minutes for the deletion to complete:

    dkp delete cluster --cluster-name=${CLUSTER_NAME}
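
    To confirm that the deletion completed, you can optionally rerun the tag query from the Tips section above and check that the cluster name is no longer listed:

    aws ec2 describe-vpcs --filter "Name=tag-key,Values=kubernetes.io/cluster" --query "Vpcs[*].Tags[?Key=='kubernetes.io/cluster'].Value | sort(@[*][0])"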
    
  2. Delete the kind Kubernetes cluster:

    dkp delete bootstrap --kubeconfig $HOME/.kube/config