Configure HTTP Proxy

Configure HTTP proxy for the Kommander cluster(s)

Kommander supports environments where access to the Internet is restricted and must go through an HTTP/HTTPS proxy.

In these environments, you must configure Kommander to use the HTTP/HTTPS proxy. In turn, Kommander configures all platform services to use the HTTP/HTTPS proxy.

NOTE: Kommander follows a common convention for using an HTTP proxy server. The convention is based on three environment variables, and is supported by many, though not all, applications.

  • HTTP_PROXY: the HTTP proxy server address
  • HTTPS_PROXY: the HTTPS proxy server address
  • NO_PROXY: a list of IPs and domain names that are not subject to proxy settings
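As an illustration of the convention, the three variables can be set for the current shell session. The proxy address below is the example address used throughout this document; the NO_PROXY value is a shortened sample:

```shell
# Set the conventional proxy variables for this shell session.
# The proxy address is the example used in this document.
# Many tools (curl, wget, pip) also honor the lowercase forms.
export HTTP_PROXY="http://proxy.company.com:3128"
export HTTPS_PROXY="http://proxy.company.com:3128"
export NO_PROXY="127.0.0.1,localhost,.svc,.svc.cluster.local"

echo "HTTP proxy: $HTTP_PROXY"
```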

Prerequisites

In the examples below:

  1. The curl command-line tool is available on the host.
  2. The proxy server address is http://proxy.company.com:3128.
  3. The proxy server address uses the http scheme.
  4. The proxy server can reach www.google.com using HTTP or HTTPS.

Verify the cluster nodes can access the Internet through the proxy server

On each cluster node, run:

curl --proxy http://proxy.company.com:3128 --head http://www.google.com
curl --proxy http://proxy.company.com:3128 --head https://www.google.com

If the proxy is working for HTTP and HTTPS, respectively, the curl command returns a 200 OK HTTP response.
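The two checks can be wrapped in a small script that reports a pass/fail per URL. This is a sketch, not part of Kommander: the `status_ok` helper is a hypothetical function that only inspects the HTTP status line, so its logic can be exercised without network access.

```shell
# PROXY is the example proxy address used in this document.
PROXY="http://proxy.company.com:3128"

# Succeeds if a status line reports a 2xx response,
# e.g. "HTTP/1.1 200 OK".
status_ok() {
  case "$1" in
    *" 2"[0-9][0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}

# With a reachable proxy, the checks would be run like this:
#   line=$(curl --silent --proxy "$PROXY" --head http://www.google.com | head -n 1)
#   status_ok "$line" && echo "proxy OK for HTTP"

# Demonstration on a literal status line:
status_ok "HTTP/1.1 200 OK" && echo "sample status line accepted"
```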

Enable gatekeeper

Gatekeeper acts as a Kubernetes mutating webhook. You can use it to inject the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables into Pod resources.

Create or update the chart configuration values.yaml file, setting the following values to enable gatekeeper:

cat << EOF >> values.yaml
services:
  gatekeeper:
    # Enables gatekeeper service in management cluster
    enabled: true
controller:
  containers:
    manager:
      extraArgs:
        # Enables gatekeeper service in managed cluster
        default-workspace-app-deployments: "gatekeeper-0.6.8"
EOF

Save the above overrides in the values.yaml file. They can then be supplied to the helm install command.

Create the kommander namespace (or the namespace where Kommander will be installed) and label it so that gatekeeper mutation is active on the namespace:

kubectl create namespace kommander
kubectl label namespace kommander gatekeeper.d2iq.com/mutate=pod-proxy
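Equivalently, the namespace and its label can be declared as a manifest and applied with kubectl apply, which may be preferable in GitOps workflows:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kommander
  labels:
    gatekeeper.d2iq.com/mutate: pod-proxy
```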

Create the gatekeeper-overrides configmap in the kommander namespace as described in this section before installing Kommander.

Enable gatekeeper for attached clusters

To enable gatekeeper installation in attached clusters, create the following overrides configmap on the host cluster:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: kommander-0.1.0-overrides
  namespace: kommander
data:
  values.yaml: |
    ---
    attached:
      prerequisites:
        gatekeeper:
          enabled: true
EOF

This ensures that gatekeeper is deployed in attached clusters.

Configure Workspace (or Project) in which you want to use proxy

To have gatekeeper mutate the manifests, create the Workspace (or Project) with the following label:

labels:
  gatekeeper.d2iq.com/mutate: "pod-proxy"

This can be done when creating the Workspace (or Project) from the UI, or by running the following command from the CLI once the namespace is created:

kubectl label namespace <WORKSPACE_NAMESPACE> "gatekeeper.d2iq.com/mutate=pod-proxy"

Configure attached clusters with proxy configuration

To ensure that gatekeeper is deployed before everything else in the attached clusters, you must manually create the exact namespace of the workspace in which the cluster will be attached, before attaching the cluster.

Execute the following command in the attached cluster before attaching it to the host cluster:

kubectl create namespace <WORKSPACE_NAMESPACE>

Then, to configure the pods in this namespace to use the proxy configuration, create the gatekeeper-overrides configmap described in the next section before attaching the cluster to the host cluster. You must label the workspace with gatekeeper.d2iq.com/mutate=pod-proxy when creating it, so that gatekeeper deploys a mutating webhook to inject the proxy configuration into the pods.

Create gatekeeper configmap in Workspace namespace

To configure gatekeeper so that these environment variables are mutated into the pods, create the following configmap in the target Workspace namespace:

export NAMESPACE=<workspace-namespace>
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: gatekeeper-overrides
  namespace: ${NAMESPACE}
data:
  values.yaml: |
    ---
    # enable mutations
    mutations:
      enable: true
      enablePodProxy: true
      podProxySettings:
        noProxy: "127.0.0.1,192.168.0.0/16,10.0.0.0/16,10.96.0.0/12,localhost,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local,kubecost-prometheus-server.kommander,logging-operator-logging-fluentd.kommander.svc,elb.amazonaws.com"
        httpProxy: "http://proxy.company.com:3128"
        httpsProxy: "http://proxy.company.com:3128"
      excludeNamespacesFromProxy: []
      namespaceSelectorForProxy:
        "gatekeeper.d2iq.com/mutate": "pod-proxy"
EOF

Set the httpProxy and httpsProxy values to the address of the HTTP and HTTPS proxy server, respectively. Set the noProxy value to the addresses that should be accessed directly, not through the proxy.

IMPORTANT: Both the HTTP and HTTPS proxy server address must use the http scheme.

NOTE: To ensure that core components work correctly, always add these addresses to the noProxy:

  • Loopback addresses (127.0.0.1 and localhost)
  • Kubernetes API Server addresses
  • Kubernetes Pod IPs (for example, 192.168.0.0/16). This range comes from two places:
    • The Calico pod CIDR, which defaults to 192.168.0.0/16
    • The podSubnet configured in CAPI objects, which must match the Calico pod CIDR (also defaults to 192.168.0.0/16)
  • Kubernetes Service addresses (for example, 10.96.0.0/12, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, .svc, .svc.cluster, .svc.cluster.local)
In addition to the above, the following are needed when installing on AWS:
  • The default VPC CIDR range of 10.0.0.0/16
  • kube-apiserver internal/external ELB address

IMPORTANT: The NO_PROXY variable contains the Kubernetes Services CIDR. This example uses the default CIDR, 10.96.0.0/12. If your cluster's CIDR is different, update the value in NO_PROXY.
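Because the noProxy value is long, it can help to assemble it from its component groups so each required entry is reviewed individually. This is a sketch using the default addresses from this document; adjust them for your cluster:

```shell
# Component groups of the noProxy value, per the notes above.
loopback="127.0.0.1 localhost"
pod_cidr="192.168.0.0/16"   # Calico default pod CIDR
svc_addresses="10.96.0.0/12 kubernetes kubernetes.default \
kubernetes.default.svc kubernetes.default.svc.cluster \
kubernetes.default.svc.cluster.local .svc .svc.cluster .svc.cluster.local"

# Join the space-separated entries with commas.
NO_PROXY=$(echo $loopback $pod_cidr $svc_addresses | tr ' ' ',')
echo "$NO_PROXY"
```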

LIMITATION: Depending on the order in which the Gatekeeper Deployment becomes Ready relative to other Deployments, not all core services are guaranteed to be mutated with the proxy environment variables; only user-deployed workloads are. If you need a core service to be mutated with your proxy environment variables, restart the AppDeployment for that core service. This behavior will be fixed in a future release of Kommander.

Configure your applications

In a default installation with gatekeeper enabled, you can have proxy environment variables applied to all your pods automatically by adding the following label to your namespace:

"gatekeeper.d2iq.com/mutate": "pod-proxy"

No further manual changes are required.

IMPORTANT: If Gatekeeper is not installed and you need to use an HTTP proxy, you must manually configure your applications as described further in this section.

Manually configure your application

Some applications follow the convention of HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables.

In this example, the environment variables are set for a container in a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: example-image
    env:
    - name: HTTP_PROXY
      value: "http://proxy.company.com:3128"
    - name: HTTPS_PROXY
      value: "http://proxy.company.com:3128"
    - name: NO_PROXY
      value: "10.0.0.0/18,localhost,127.0.0.1,169.254.169.254,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,.svc,.svc.cluster,.svc.cluster.local"

See Define Environment Variables for a Container for more details.
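To make the convention concrete, here is a simplified sketch of how an application that honors NO_PROXY might decide whether to route a request through the proxy. The `use_proxy` helper is hypothetical and deliberately simplified: it matches exact entries and domain suffixes only, not CIDR ranges, and is not the implementation any particular application uses.

```shell
NO_PROXY="localhost,127.0.0.1,.svc.cluster.local"

# Returns 0 (use the proxy) unless the host matches a NO_PROXY entry,
# either exactly or as a domain suffix.
use_proxy() {
  host="$1"
  old_ifs=$IFS; IFS=','
  for entry in $NO_PROXY; do
    case "$host" in
      "$entry"|*"$entry") IFS=$old_ifs; return 1 ;;  # bypass the proxy
    esac
  done
  IFS=$old_ifs
  return 0
}

use_proxy example.com && echo "example.com: via proxy"
use_proxy kubernetes.default.svc.cluster.local \
  || echo "kubernetes.default.svc.cluster.local: direct"
```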