Workloads

Deploying Operators, Workloads, and Applications

This section describes different application and service workloads and how to install them onto Konvoy. It assumes familiarity with the Kubernetes CLI tool (kubectl), applications, manifest files, and controllers.

Workloads and Applications

There are various types of applications, with different styles of packaging and configuration, but in the end, running applications on Kubernetes means running containers in pods. The different types of applications include:

  • Stateless
  • Stateful
  • Control Plane extensions

A stateless service is a service that does not need to manage state on disk. By nature, scheduling stateless services is less constrained, and their failover does not require any coordination.
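
For illustration only, the minimal Deployment below runs a stateless web server; the image name and replica count are arbitrary examples, not Konvoy requirements:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                    # any replica can serve any request
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25        # example image; any stateless container works
            ports:
            - containerPort: 80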

A stateful service is a service that requires a way to manage storage. Beyond storage, stateful applications often require extra coordination for scaling (up or down) and failure recovery. This coordination can range from storage coordination to service-specific needs such as data rebalancing, snapshotting, and resharding. The coordination needed is specific to the service and can be managed manually or through a controller that provides a control plane extension to Kubernetes.
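
As a sketch, the StatefulSet below gives each replica a stable identity and its own PersistentVolumeClaim; the image, service name, and storage size are placeholder values:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: db
    spec:
      serviceName: db                # headless Service providing stable per-pod DNS names
      replicas: 3
      selector:
        matchLabels:
          app: db
      template:
        metadata:
          labels:
            app: db
        spec:
          containers:
          - name: db
            image: postgres:15       # example image only
            volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:          # each replica gets its own PersistentVolumeClaim
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi          # placeholder size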

Kubernetes is a control plane for managing worker nodes and pods in the cluster. The basic control plane is domain agnostic, meaning it manages generic resources but cannot manage application-specific control needs. For this reason, Kubernetes provides the ability to extend the cluster. The two control plane extensions that matter most for managing stateful services are kube-apiserver extensions called custom resource definitions (CRDs) and controllers. CRDs provide the ability to customize the Kubernetes API with new kinds of objects. Controllers provide a way to interact with new kinds of objects, observe new and existing kinds of objects, and respond to the state of the cluster in application-specific ways. The combination of these two extensions provides a way to automate a stateful service in a domain-specific way. When used in this way, the service is referred to as a Kubernetes Operator.
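
As a sketch of the first extension point, the CRD below registers a hypothetical MyDatabase kind with the kube-apiserver; a matching controller (not shown) would watch MyDatabase objects and reconcile the cluster toward their declared state:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: mydatabases.example.com        # must be <plural>.<group>
    spec:
      group: example.com
      names:
        kind: MyDatabase
        plural: mydatabases
        singular: mydatabase
      scope: Namespaced
      versions:
      - name: v1alpha1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  replicas:
                    type: integer          # example field the controller would act on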

Workload Packaging

The most basic form for declaratively expressing a workload is one or more YAML manifest files. It is common to use versionable manifest files for standard in-house workflow deployments. Details for working with manifests can be found under user workloads.
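
For example, versioned manifests are typically applied with kubectl; the file and directory names below are placeholders:

    kubectl apply -f deployment.yaml       # apply a single manifest
    kubectl apply -f manifests/            # apply every manifest in a directory
    kubectl get pods                       # confirm the resulting pods are running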

Although there are different forms of packages, at their core they are generally a way to group a set of manifest files into a bundle for deployment as a unit. Common packaging used in this way includes the following (example install commands for each follow the list):

  • Helm - Defines a bundle as a “chart.” There are charts for stateless services, stateful services, and control plane extensions. Although there are charts for stateful services, it is best practice not to use them in production; their operators are a better alternative. For more details on working with Helm, read the Helm workload section.
  • KUDO Packages - Defines a way to bundle an ordered set of manifests as a KUDO Operator. KUDO stands for Kubernetes Universal Declarative Operator and defines a way to create an operator through declaration. KUDO packages can be used from a URL, from the file system, and from a KUDO repository. The basics for working with KUDO are available at using KUDO, with full details at https://kudo.dev.
  • Operator SDK - Defines another packaging mechanism for deploying Operator SDK operators. Operator SDK requires the installation of OLM, which is accomplished through manifest files. Operators can be found at https://operatorhub.io/. For more details, see working with Operator SDK.
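
The commands below sketch how each packaging format is typically installed from its CLI; the repository, chart, release, and operator names in angle brackets are placeholders:

    # Helm: install a chart from a repository as a named release
    helm repo add <repo-name> <repo-url>
    helm install <release-name> <repo-name>/<chart-name>

    # KUDO: install the KUDO controller, then a KUDO Operator from the repository
    kubectl kudo init
    kubectl kudo install <operator-name>

    # Operator SDK / OLM: install OLM, then an operator published on operatorhub.io
    operator-sdk olm install
    kubectl create -f https://operatorhub.io/install/<operator-name>.yaml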

Before you begin

Deploying a workload into a Kubernetes cluster, regardless of packaging, requires the following prerequisites (the kubectl checks after this list show one way to verify them):

  • The client must be configured for the server and namespace
  • You must have authorization for the server and/or namespace
  • Resources must be available for the workload within the allowed quota
  • The workload must be schedulable according to the configured policies
  • The proper client tooling must be installed, which differs depending on package type; at a minimum, you need kubectl
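
The kubectl checks below are one way to verify these prerequisites before deploying; the namespace and resource names are examples:

    kubectl config current-context                        # confirm the client targets the intended cluster
    kubectl auth can-i create deployments -n <namespace>  # confirm authorization in the target namespace
    kubectl describe quota -n <namespace>                 # review available resource quota
    kubectl version --client                              # confirm the minimum client tooling is present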