Security

Creating service accounts and granting permissions

DC/OS Kafka ZooKeeper Security

  • The DC/OS Kafka ZooKeeper service allows you to create a service account to configure access for Kafka ZooKeeper. The service allows you to create and assign permissions as required for access.

  • The DC/OS Kafka ZooKeeper service supports ZooKeeper’s native Kerberos authentication mechanism. The service provides automation and orchestration to simplify the usage of these important features, with both Client-Server and Server-Server mutual authentication supported.

An overview of the ZooKeeper Kerberos security features can be found here.

NOTE: These security features are only available on DC/OS Enterprise 1.10 and later.

Provisioning a service account

This section describes how to configure DC/OS access for Kafka ZooKeeper. Depending on your security mode, Kafka ZooKeeper may require service authentication for access to DC/OS.

A service like Kafka ZooKeeper typically performs certain privileged actions on the cluster, which might require authenticating with the cluster. A service account associated with the service is used to authenticate with the DC/OS cluster. It is recommended to provision a separate service account for each service that performs privileged operations. Service accounts authenticate using a public-private key pair. The public key is used to create the service account in the cluster, while the corresponding private key is stored in the secret store. The service account and the service account secret are passed to the service as install-time options.
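
For example, once the service account and secret exist, they are referenced in the install-time options. A minimal sketch (the option names service_account and service_account_secret follow the usual DC/OS SDK convention and are assumptions here; check the package's configuration schema for the exact names):

cat > kafka-zookeeper-options.json <<'EOF'
{
    "service": {
        "name": "kafka-zookeeper",
        "service_account": "kafka-zookeeper",
        "service_account_secret": "kafka-zookeeper/sa-secret"
    }
}
EOF
dcos package install kafka-zookeeper --options=kafka-zookeeper-options.json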

Security mode   Service Account
Disabled        Not available
Permissive      Optional
Strict          Required

If you install a service in permissive mode and do not specify a service account, Metronome and Marathon will act as if requests made by this service are made by an account with the superuser permission.

Prerequisites:

Create a Key Pair

In this step, a 2048-bit RSA public-private key pair is created using the Enterprise DC/OS CLI.

Create a public-private key pair and save each value into a separate file within the current directory.

dcos security org service-accounts keypair <private-key>.pem <public-key>.pem

NOTE: You can use the DC/OS Secret Store to secure the key pair.

Create a Service Account

From a terminal prompt, create a new service account (for example, kafka-zookeeper) containing the public key (<your-public-key>.pem).

dcos security org service-accounts create -p <your-public-key>.pem -d <description> kafka-zookeeper

You can verify your new service account using the following command.

dcos security org service-accounts show kafka-zookeeper

Create a Secret

Create a secret (kafka-zookeeper/<secret-name>) with your service account and private key specified (<private-key>.pem).

NOTE: If you store your secret in a path that matches the service name, for example, service name and secret path are both kafka-zookeeper, then only the service named kafka-zookeeper can access it.

dcos security secrets create-sa-secret <private-key>.pem <service-account-id> kafka-zookeeper/<secret-name>

NOTE: If you are running DC/OS 1.11 or older (now EOL), you need to add the --strict flag to the above command. For example: dcos security secrets create-sa-secret --strict <private-key>.pem <service-account-id> kafka-zookeeper/sa-secret

You can list the secrets with this command:

dcos security secrets list /

Create and Assign Permissions

Use the following DC/OS CLI commands to rapidly provision the Kafka ZooKeeper service account with the required permissions.

  1. Create the permission.

IMPORTANT: The value to be used for <service-role> depends on the service name, package version, and DC/OS version. The table below shows a few examples of service names and the corresponding Mesos roles they would use. This version of kafka-zookeeper has quota support built in. To determine whether the service group you are deploying to has enforceRole set to true or false, please check this KB article.

If you need help configuring the permissions for kafka-zookeeper, please feel free to reach out to D2iQ support by filing a support ticket. Replace the instances of <service-role> below with the correct name (<name>-role).

Service name                   <service-role>                                 <service-role>
                               (DC/OS 1.13 or older, or DC/OS 2.0 or newer    (DC/OS 2.0 or newer and
                               and enforceRole=false)                         enforceRole=true)
/kafka-zookeeper               kafka-zookeeper-role                           kafka-zookeeper-role
/kafka-zookeeper-prod          kafka-zookeeper-prod-role                      kafka-zookeeper-prod-role
/team01/kafka-zookeeper        team01__kafka-zookeeper-role                   team01
/team01/prod/kafka-zookeeper   team01__prod__kafka-zookeeper-role             team01
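
One way to check whether a service group has enforceRole enabled is to query Marathon's groups API directly. A sketch, assuming the group /team01 and that jq is installed on the machine running the CLI:

curl -s -H "Authorization: token=$(dcos config show core.dcos_acs_token)" \
    "$(dcos config show core.dcos_url)/marathon/v2/groups/team01" | jq .enforceRole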

Permissive

Run these commands with the service account name you created for the service in the Create a Service Account step above. In these examples we use kafka-zookeeper.

dcos security org users grant kafka-zookeeper dcos:mesos:master:framework:role:<service-role> create --description "Allow registering as a framework of role <service-role> with Mesos master"
dcos security org users grant kafka-zookeeper dcos:mesos:master:reservation:role:<service-role> create --description "Allow creating Mesos resource reservations of role <service-role>"
dcos security org users grant kafka-zookeeper dcos:mesos:master:volume:role:<service-role> create --description "Allow creating Mesos persistent volumes of role <service-role>"
dcos security org users grant kafka-zookeeper dcos:mesos:master:reservation:principal:kafka-zookeeper delete --description "Allow unreserving Mesos resource reservations with principal kafka-zookeeper"
dcos security org users grant kafka-zookeeper dcos:mesos:master:volume:principal:kafka-zookeeper delete --description "Allow deleting Mesos persistent volumes with principal kafka-zookeeper"

Strict

Run these commands with the service account name you created for the service in the Create a Service Account step above. In these examples we use kafka-zookeeper.

dcos security org users grant kafka-zookeeper dcos:mesos:master:task:user:nobody create --description "Allow running a task as linux user nobody"
dcos security org users grant kafka-zookeeper dcos:mesos:master:framework:role:<service-role> create --description "Allow registering as a framework of role <service-role> with Mesos master"
dcos security org users grant kafka-zookeeper dcos:mesos:master:reservation:role:<service-role> create --description "Allow creating Mesos resource reservations of role <service-role>"
dcos security org users grant kafka-zookeeper dcos:mesos:master:volume:role:<service-role> create --description "Allow creating Mesos persistent volumes of role <service-role>"
dcos security org users grant kafka-zookeeper dcos:mesos:master:reservation:principal:kafka-zookeeper delete --description "Allow unreserving Mesos resource reservations with principal kafka-zookeeper"
dcos security org users grant kafka-zookeeper dcos:mesos:master:volume:principal:kafka-zookeeper delete --description "Allow deleting Mesos persistent volumes with principal kafka-zookeeper"

Authentication

DC/OS Kafka ZooKeeper supports the Kerberos authentication mechanism.

Kerberos Authentication

Kerberos authentication relies on a central authority to verify the identity of ZooKeeper clients. DC/OS Kafka ZooKeeper integrates with your existing Kerberos infrastructure to verify the identity of clients.

Prerequisites

  • The hostname and port of a KDC reachable from your DC/OS cluster
  • Sufficient access to the KDC to create Kerberos principals
  • Sufficient access to the KDC to retrieve a keytab for the generated principals
  • The DC/OS Enterprise CLI
  • DC/OS Superuser permissions

Configure Kerberos Authentication

Create principals

The DC/OS Kafka ZooKeeper service requires a Kerberos principal for each server to be deployed. Each principal must be of the form

<service primary>/zookeeper-<server index>-server.<service subdomain>.autoip.dcos.thisdcos.directory@<service realm>

with:

  • service primary = service.security.kerberos.primary
  • server index = 0 up to node.count - 1
  • service subdomain = service.name with all /'s removed
  • service realm = service.security.kerberos.realm

For example, if installing with these options in addition to your own:

{
    "service": {
        "name": "a/good/example",
        "security": {
            "kerberos": {
                "primary": "example",
                "realm": "EXAMPLE"
            }
        }
    },
    "node": {
        "count": 3
    }
}

then the principals to create would be:

example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
example/zookeeper-1-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
example/zookeeper-2-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE
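
The same list can be generated with a short shell loop. A sketch using the example values above:

SERVICE_NAME="a/good/example"
PRIMARY="example"
REALM="EXAMPLE"
NODE_COUNT=3
SUBDOMAIN="${SERVICE_NAME//\//}"   # service.name with all /'s removed
for i in $(seq 0 $((NODE_COUNT - 1))); do
    echo "${PRIMARY}/zookeeper-${i}-server.${SUBDOMAIN}.autoip.dcos.thisdcos.directory@${REALM}"
done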

Active Directory

Microsoft Active Directory can be used as a Kerberos KDC. Doing so requires creating a mapping between Active Directory users and Kerberos principals.

The utility ktpass can be used to both create a keytab from Active Directory and generate the mapping at the same time.
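
For example, a sketch of a ktpass invocation for the first principal above (the Active Directory user zk0svc and the AD domain AD.EXAMPLE.COM are hypothetical; adjust the encryption type and password handling to your environment):

ktpass /princ example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE ^
    /mapuser zk0svc@AD.EXAMPLE.COM /crypto AES256-SHA1 /ptype KRB5_NT_PRINCIPAL ^
    /pass <password> /out zookeeper-0-server.keytab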

The mapping can, however, be created manually. For a Kerberos principal like <primary>/<host>@<REALM>, the Active Directory user should have its servicePrincipalName and userPrincipalName attributes set to,

servicePrincipalName = <primary>/<host>
userPrincipalName = <primary>/<host>@<REALM>

For example, with the Kerberos principal example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE, the correct mapping would be:

servicePrincipalName = example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory
userPrincipalName = example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE

If either mapping is incorrect or not present, the service will fail to authenticate that Principal. The symptom in the Kerberos debug logs will be an error of the form

KRBError:
sTime is Wed Feb 07 03:22:47 UTC 2018 1517973767000
suSec is 697984
error code is 6
error Message is Client not found in Kerberos database
sname is krbtgt/AD.MESOSPHERE.COM@AD.MESOSPHERE.COM
msgType is 30

when the userPrincipalName is set incorrectly, and an error of the form

KRBError:
sTime is Wed Feb 07 03:44:57 UTC 2018 1517975097000
suSec is 128465
error code is 7
error Message is Server not found in Kerberos database
sname is kafka/kafka-1-broker.confluent-kafka.autoip.dcos.thisdcos.directory@AD.MESOSPHERE.COM
msgType is 30

when the servicePrincipalName is set incorrectly.

Place Service Keytab in DC/OS Secret Store

The DC/OS Kafka ZooKeeper service uses a keytab containing all node principals (service keytab). After creating the principals above, generate the service keytab making sure to include all the node principals. This will be stored as a secret in the DC/OS Secret Store.

NOTE: DC/OS 1.10 does not support adding binary secrets directly to the secret store, only text files are supported. Instead, first base64 encode the file, and save it to the secret store as /desired/path/__dcos_base64__secret_name. The DC/OS security modules will handle decoding the file when it is used by the service.

The service keytab should be stored at service/path/name/service.keytab (as noted above, for DC/OS 1.10 it would be service/path/name/__dcos_base64__service.keytab), where service/path/name matches the path and name of the service. For example, if installing with the options

{
    "service": {
        "name": "a/good/example"
    }
}

then the service keytab should be stored at a/good/example/service.keytab.
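
A sketch of one way to generate and store that keytab, assuming an MIT Kerberos KDC and the example principals above (the secrets CLI flag names vary slightly between DC/OS versions; check dcos security secrets create --help):

# Add each node principal to a single keytab using MIT Kerberos kadmin:
kadmin -p <admin principal> -q "ktadd -k service.keytab example/zookeeper-0-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE"
kadmin -p <admin principal> -q "ktadd -k service.keytab example/zookeeper-1-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE"
kadmin -p <admin principal> -q "ktadd -k service.keytab example/zookeeper-2-server.agoodexample.autoip.dcos.thisdcos.directory@EXAMPLE"

# DC/OS 1.10: base64-encode the binary keytab and store it under the __dcos_base64__ name:
base64 -w 0 service.keytab > service.keytab.base64
dcos security secrets create --value-file service.keytab.base64 a/good/example/__dcos_base64__service.keytab

# DC/OS 1.11 and later: the binary keytab can be stored directly:
dcos security secrets create --file service.keytab a/good/example/service.keytab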

Documentation for adding a file to the secret store can be found here.

NOTE: Secrets access is controlled by DC/OS Spaces, which function like namespaces. Any secret in the same DC/OS Space as the service will be accessible by the service. However, matching the two paths is the most secure option. Additionally, the secret name service.keytab is a convention, not a requirement.

Install the Service

Install the DC/OS Kafka ZooKeeper service with the following options in addition to your own:

{
    "service": {
        "security": {
            "kerberos": {
                "enabled": true,
                "kdc": {
                    "hostname": "<kdc host>",
                    "port": <kdc port>
                },
                "primary": "<service primary default zookeeper>",
                "realm": "<realm>",
                "keytab_secret": "<path to keytab secret>",
                "debug": <true|false default false>
            }
        }
    }
}
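
For example, assuming these options are merged with your other options into a file named kafka-zookeeper-kerberos-options.json, the service can be installed with:

dcos package install kafka-zookeeper --options=kafka-zookeeper-kerberos-options.json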

NOTE: It is possible to enable Kerberos after initial installation but the service may be unavailable during the transition. Additionally, your ZooKeeper clients will need to be reconfigured. For more information see the following section.

Enabling Kerberos After Deployment

It is possible to enable Kerberos authentication after the deployment of DC/OS Kafka ZooKeeper. As described in the Rolling Upgrade section of the Apache ZooKeeper Server-Server mutual authentication documentation (https://cwiki.apache.org/confluence/display/ZOOKEEPER/Server-Server+mutual+authentication), this requires multiple rolling restarts of the ZooKeeper ensemble, and client connectivity may be lost at times.

Assuming that DC/OS Kafka ZooKeeper was initially deployed with service.security.kerberos.enabled set to false, the following steps can be used to enable Kerberos for the service. First, create a kerberos-toggle-step-1.json file with the following contents:

{
    "service": {
        "security": {
            "kerberos": {
                "enabled": true,
                "kdc": {
                    "hostname": "<kdc host>",
                    "port": <kdc port>
                },
                "primary": "<service primary default zookeeper>",
                "realm": "<realm>",
                "keytab_secret": "<path to keytab secret>",
                "debug": <true|false default false>,
                "advanced": {
                    "required_for_quorum_learner": false,
                    "required_for_quorum_server": false,
                    "required_for_client": false
                }
            }
        }
    }
}

You can read more about these options in the service.security.kerberos.advanced section.

Using this config file, update your DC/OS Kafka ZooKeeper service:

$ dcos kafka-zookeeper --name=<service name> update start --options=kerberos-toggle-step-1.json

and wait for the deploy (update) plan to complete:

$ dcos kafka-zookeeper --name=<service name> plan show deploy
deploy (serial strategy) (COMPLETE)
└─ node-update (serial strategy) (COMPLETE)
   ├─ zookeeper-0:[server, metrics] (COMPLETE)
   ├─ zookeeper-1:[server, metrics] (COMPLETE)
   └─ zookeeper-2:[server, metrics] (COMPLETE)

The service will now have deployed with Kerberos enabled, but with non-authenticated connections for leader election and from clients still allowed. In order to obtain a secure cluster, these unauthenticated connections should now be turned off to force secure connections.

Create a kerberos-toggle-step-2.json file with the following contents (note that it is only required to specify the options that change):

{
    "service": {
        "security": {
            "kerberos": {
                "advanced": {
                    "required_for_quorum_learner": true,
                    "required_for_quorum_server": false,
                    "required_for_client": false
                }
            }
        }
    }
}

and deploy this as a configuration update:

$ dcos kafka-zookeeper --name=<service name> update start --options=kerberos-toggle-step-2.json
$ dcos kafka-zookeeper --name=<service name> plan show deploy
deploy (serial strategy) (COMPLETE)
└─ node-update (serial strategy) (COMPLETE)
   ├─ zookeeper-0:[server, metrics] (COMPLETE)
   ├─ zookeeper-1:[server, metrics] (COMPLETE)
   └─ zookeeper-2:[server, metrics] (COMPLETE)

This deploys a Kafka ZooKeeper instance that requires Kerberos authentication between learners during leader election.

As the next step in the rolling update process, create a kerberos-toggle-step-3.json file with the following contents:

{
    "service": {
        "security": {
            "kerberos": {
                "advanced": {
                    "required_for_quorum_learner": true,
                    "required_for_quorum_server": true,
                    "required_for_client": false
                }
            }
        }
    }
}

and deploy this as a configuration update:

$ dcos kafka-zookeeper --name=<service name> update start --options=kerberos-toggle-step-3.json
$ dcos kafka-zookeeper --name=<service name> plan show deploy
deploy (serial strategy) (COMPLETE)
└─ node-update (serial strategy) (COMPLETE)
   ├─ zookeeper-0:[server, metrics] (COMPLETE)
   ├─ zookeeper-1:[server, metrics] (COMPLETE)
   └─ zookeeper-2:[server, metrics] (COMPLETE)

Kafka ZooKeeper will now require Kerberos authentication for the entire leader election process.

The final step is to require Kerberos authentication for clients connecting to the DC/OS Kafka ZooKeeper instance with an options file (say kerberos-toggle-step-4.json) as follows:

{
    "service": {
        "security": {
            "kerberos": {
                "advanced": {
                    "required_for_quorum_learner": true,
                    "required_for_quorum_server": true,
                    "required_for_client": true
                }
            }
        }
    }
}

which is deployed:

$ dcos kafka-zookeeper --name=<service name> update start --options=kerberos-toggle-step-4.json
$ dcos kafka-zookeeper --name=<service name> plan show deploy
deploy (serial strategy) (COMPLETE)
└─ node-update (serial strategy) (COMPLETE)
   ├─ zookeeper-0:[server, metrics] (COMPLETE)
   ├─ zookeeper-1:[server, metrics] (COMPLETE)
   └─ zookeeper-2:[server, metrics] (COMPLETE)

Now, unauthenticated clients will only be allowed to ping, create a session, close a session, or authenticate when communicating with the Kafka ZooKeeper instance.

NOTE: The default settings for service.security.kerberos.advanced.required_for_quorum_learner, service.security.kerberos.advanced.required_for_quorum_server, service.security.kerberos.advanced.required_for_client are all true.
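
Clients connecting to the ensemble must now authenticate via Kerberos (SASL). A minimal sketch of a client-side JAAS configuration for a standard ZooKeeper client such as zkCli.sh (the client principal, keytab path, and client port are assumptions; adapt them to your environment):

cat > client-jaas.conf <<'EOF'
// ZooKeeper clients read their Kerberos credentials from the "Client" section.
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/path/to/client.keytab"
    principal="client@EXAMPLE";
};
EOF
# Point the client JVM at the JAAS file and connect to one of the servers:
export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=$PWD/client-jaas.conf"
bin/zkCli.sh -server zookeeper-0-server.<service subdomain>.autoip.dcos.thisdcos.directory:<client port>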

Disabling Kerberos After Deployment

NOTE: Disabling Kerberos after deployment is not supported.

Securely Exposing DC/OS Kafka ZooKeeper Outside the Cluster

Kerberos security is tightly coupled to the DNS hosts of the ZooKeeper tasks. Therefore, exposing a secure Kafka ZooKeeper service outside of the cluster requires additional setup.

Server to Client Connection

To expose a secure Kafka ZooKeeper service outside of the cluster, any client connecting to it must be able to access all tasks of the service via the IP address assigned to the task. This IP address will be one of the following:

  • An IP address on a virtual network
  • The IP address of the agent the task is running on.
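
Assuming kafka-zookeeper follows the standard DC/OS SDK CLI conventions, the tasks' addresses and DNS names can be listed with the endpoints subcommand (a sketch; the endpoint name printed by the first command is passed to the second):

dcos kafka-zookeeper --name=<service name> endpoints
dcos kafka-zookeeper --name=<service name> endpoints <endpoint name>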

Forwarding DNS and Custom Domain

Every DC/OS cluster has a unique cryptographic ID which can be used to forward DNS queries to that Cluster. To securely expose the service outside the cluster, external clients must have an upstream resolver configured to forward DNS queries to the DC/OS cluster of the service as described here.

With only forwarding configured, DNS entries within the DC/OS cluster will be resolvable at <task-domain>.autoip.dcos.<cryptographic-id>.dcos.directory. However, if you configure a DNS alias, you can use a custom domain. For example, <task-domain>.cluster-1.acmeco.net. In either case, the DC/OS Kafka ZooKeeper service will need to be installed with an additional security option:

{
    "service": {
        "security": {
            "custom_domain": "<custom-domain>"
        }
    }
}

where <custom-domain> is one of autoip.dcos.<cryptographic-id>.dcos.directory or your organization-specific domain (e.g., cluster-1.acmeco.net).

As a concrete example, using the custom domain of cluster-1.acmeco.net the server 0 task would have a host of zookeeper-0-server.<service-name>.cluster-1.acmeco.net.
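
External clients can verify that such a host resolves through the forwarded or aliased domain with a standard DNS lookup, for example:

dig +short zookeeper-0-server.<service-name>.cluster-1.acmeco.net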

Kerberos Principal Changes

With a custom domain, endpoint discovery will work as normal. Kerberos, however, requires slightly different configuration. As noted in the Create principals section, the principals of the service depend on the hostname of the service. Use the correct domain when you create the Kerberos principals.

For example, if installing with the following settings:

{
    "service": {
        "name": "a/good/example",
        "security": {
            "kerberos": {
                "primary": "example",
                "realm": "EXAMPLE"
            }
        }
    },
    "node": {
        "count": 3
    }
}

then the principals will be as follows:

example/zookeeper-0-server.agoodexample.cluster-1.example.net@EXAMPLE
example/zookeeper-1-server.agoodexample.cluster-1.example.net@EXAMPLE
example/zookeeper-2-server.agoodexample.cluster-1.example.net@EXAMPLE