Configuration

Configuration options for the DC/OS Apache HDFS service

The default DC/OS Apache HDFS installation provides reasonable defaults for trying out the service, but may not be sufficient for production use. You may require a different configuration depending on the context of the deployment.

Installing with Custom Configuration

The following are some examples of how to customize the installation of your Apache HDFS instance.

In each case, you would create a new Apache HDFS instance using the custom configuration as follows:

dcos package install hdfs --options=sample-hdfs.json

We recommend that you store your custom configuration in source control.
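For example, you can view the full set of configurable options, along with their default values, before writing your own options file:

dcos package describe hdfs --config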

Installing multiple instances

By default, the Apache HDFS service is installed with a service name of hdfs. You may specify a different name using a custom service configuration as follows:

{
  "service": {
    "name": "hdfs-other"
  }
}

When the above JSON configuration is passed to the dcos package install hdfs command via the --options argument, the new service will use the name specified in that JSON configuration:

dcos package install hdfs --options=hdfs-other.json

Multiple instances of Apache HDFS may be installed into your DC/OS cluster by customizing the name of each instance. For example, you might have one instance of Apache HDFS named hdfs-staging and another named hdfs-prod, each with its own custom configuration.
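For instance, assuming options files named hdfs-staging.json and hdfs-prod.json (hypothetical file names) whose service.name values are hdfs-staging and hdfs-prod respectively, the two instances would be installed as follows:

dcos package install hdfs --options=hdfs-staging.json
dcos package install hdfs --options=hdfs-prod.json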

After specifying a custom name for your instance, it can be reached using dcos hdfs CLI commands or directly over HTTP as described below.

WARNING: The service name cannot be changed after initial install. Changing the service name would require installing a new instance of the service against the new name, then copying over any data as necessary to the new instance.

Installing into folders

In DC/OS 1.10 and later, services may be installed into folders by specifying a slash-delimited service name. For example:

{
  "service": {
    "name": "/foldered/path/to/hdfs"
  }
}

The above example will install the service under a path of foldered => path => to => hdfs. It can then be reached using dcos hdfs CLI commands or directly over HTTP as described below.

WARNING: The service folder location cannot be changed after initial install. Changing the folder location would require installing a new instance of the service against the new name, then copying over any data as necessary to the new instance.

Addressing named instances

After you’ve installed the service under a custom name or under a folder, it may be accessed from all dcos hdfs CLI commands using the --name argument. If --name is omitted, it defaults to the package name, hdfs.

For example, if you had an instance named hdfs-dev, the following command would invoke a pod list command against it:

dcos hdfs --name=hdfs-dev pod list

The same query would be over HTTP as follows:

curl -H "Authorization:token=$auth_token" <dcos_url>/service/hdfs-dev/v1/pod

Likewise, if you had an instance in a folder like /foldered/path/to/hdfs, the following command would invoke a pod list command against it:

dcos hdfs --name=/foldered/path/to/hdfs pod list

Similarly, it could be queried directly over HTTP as follows:

curl -H "Authorization:token=$auth_token" <dcos_url>/service/foldered/path/to/hdfs/v1/pod

You may add a -v (verbose) argument to any dcos hdfs command to see the underlying HTTP queries that are being made. This can be a useful tool to see where the CLI is getting its information. In practice, dcos hdfs commands are a thin wrapper around an HTTP interface provided by the DC/OS Apache HDFS Service itself.
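For example, the following repeats the earlier pod list query with verbose output enabled:

dcos hdfs --name=hdfs-dev -v pod list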

Integration with DC/OS access controls

In Enterprise DC/OS, DC/OS access controls can be used to restrict access to your service. To give a non-superuser complete access to a service, grant them the following list of permissions:

dcos:adminrouter:service:marathon full
dcos:service:marathon:marathon:<service-name> full
dcos:service:adminrouter:<service-name> full
dcos:adminrouter:ops:mesos full
dcos:adminrouter:ops:slave full

Where <service-name> is your full service name, including the folder if it is installed in one.
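As a sketch, assuming the Enterprise DC/OS CLI security subcommand is installed and a user named alice (hypothetical), the permissions for a service named hdfs-prod could be granted as follows:

dcos security org users grant alice dcos:adminrouter:service:marathon full
dcos security org users grant alice dcos:service:marathon:marathon:hdfs-prod full
dcos security org users grant alice dcos:service:adminrouter:hdfs-prod full
dcos security org users grant alice dcos:adminrouter:ops:mesos full
dcos security org users grant alice dcos:adminrouter:ops:slave full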

Service Settings

Placement Constraints

Placement constraints allow you to customize where a service is deployed in the DC/OS cluster. Placement constraints use the Marathon operators syntax. For example, [["hostname", "UNIQUE"]] ensures that at most one pod instance is deployed per agent.

A common task is to specify a list of whitelisted systems to deploy to. To achieve this, use the following syntax for the placement constraint:

[["hostname", "LIKE", "10.0.0.159|10.0.1.202|10.0.3.3"]]

IMPORTANT: Be sure to include excess capacity in such a scenario so that if one of the whitelisted systems goes down, there is still enough capacity to repair your service.
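As an illustration of where such a constraint fits in the install options, the sketch below applies the whitelist to the data nodes. The data_node.placement_constraint key is an assumption; check your package version's configuration schema for the exact key name:

{
  "data_node": {
    "placement_constraint": "[[\"hostname\", \"LIKE\", \"10.0.0.159|10.0.1.202|10.0.3.3\"]]"
  }
}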

Updating Placement Constraints

Clusters change over time, and so may your placement constraints. However, already-running service pods will not be affected by changes to placement constraints: altering a placement constraint might invalidate the current placement of a running pod, and the pod will not be relocated automatically because doing so is a destructive action. We recommend the following procedure for updating the placement constraints of a pod:

  • Update the placement constraint definition in the service.
  • For each affected pod, one at a time, perform a pod replace, as shown below. This will (destructively) move the pod into accordance with the new placement constraints.
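For example, assuming the default service name of hdfs and a data pod named data-0, the replacement would be performed as follows:

dcos hdfs --name=hdfs pod replace data-0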

Zones

Requires: DC/OS 1.11 Enterprise or later.

Placement constraints can be applied to DC/OS zones by referring to the @zone key. For example, one could spread pods across a minimum of three different zones by including this constraint:

[["@zone", "GROUP_BY", "3"]]

For the @zone constraint to be applied correctly, DC/OS must have Fault Domain Awareness enabled and configured.

WARNING: A service installed without a zone constraint cannot be updated to have one, and a service installed with a zone constraint may not have it removed.

Virtual networks

DC/OS Apache HDFS supports deployment on virtual networks on DC/OS (including the dcos overlay network), allowing each container (task) to have its own IP address and not use port resources on the agent machines. This can be specified by passing the following configuration during installation:

{
  "service": {
    "virtual_network_enabled": true
  }
}
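If your cluster has more than one virtual network, the network can also be selected by name. The virtual_network_name key below follows the common dcos-commons convention and defaults to dcos; treat it as an assumption and verify it against your package's configuration schema:

{
  "service": {
    "virtual_network_enabled": true,
    "virtual_network_name": "dcos"
  }
}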

NOTE: Once the service is deployed on a virtual network, it cannot be updated to use the host network.

User

By default, all pods’ containers are started as the system user “nobody”. If your system is configured to use another system user (for instance, you may have externally mounted persistent volumes owned by root), you can override this by setting a custom value for the service’s user property, for example:

{
  "service": {
    "user": "root"
  }
}

Regions

The service parameter region can be used to deploy the service in an alternate region. By default, the service is deployed in the “local” region, which is the region the DC/OS masters are running in. To install a service in a specific region, include the following in its options:

{
  "service": {
    "region": "<region>"
  }
}

WARNING: A service may not be moved between regions.

Node Configuration

The node configuration objects correspond to the configuration for nodes in the HDFS cluster. Node configuration must be specified during installation and may be modified during configuration updates, with the exception of the disk and disk_type properties, which cannot be changed once set.

A Note on Memory Configuration

As part of the configuration for each node type, the amount of memory in MB allocated to the node can be specified. This value must be larger than the specified maximum heap size for the given node type, leaving enough headroom for additional memory used by the JVM and other overhead. A good rule of thumb is to allocate twice as much memory as the size of the heap (set using either hdfs.hadoop_heapsize or <node type>.hadoop_<node type>node_opts).
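For example, if the heap size is set to 2048 MB via hdfs.hadoop_heapsize, allocating roughly 4096 MB of memory to the node is a reasonable starting point. In options form this might look as follows; the name_node.mem key is an assumption, so verify the exact key against your package's configuration schema:

{
  "hdfs": {
    "hadoop_heapsize": 2048
  },
  "name_node": {
    "mem": 4096
  }
}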

A Note on Disk Types

As noted above, the disk size and type specifications cannot be modified after initial installation. The following disk volume types are available:

  • ROOT: Data is stored on the same volume as the agent work directory and the node tasks use the configured amount of disk space.
  • MOUNT: Data will be stored on a dedicated, operator-formatted volume attached to the agent. Dedicated MOUNT volumes have performance advantages and a disk error on these MOUNT volumes will be correctly reported to HDFS.
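For example, to place data node volumes on dedicated MOUNT disks (mirroring the volume profile example later in this section), include the following in your install options:

{
  "data_node": {
    "disk_type": "MOUNT"
  }
}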

HDFS File System Configuration

The HDFS file system network configuration, permissions, and compression are configured via the hdfs JSON object. Once these properties are set at installation time, they cannot be reconfigured.
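A minimal sketch of such an hdfs object is shown below. The key names (name_node_rpc_port, name_node_http_port, permissions_enabled) are illustrative assumptions; consult the package's configuration schema for the authoritative list:

{
  "hdfs": {
    "name_node_rpc_port": 9001,
    "name_node_http_port": 9002,
    "permissions_enabled": false
  }
}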

Operating System Configuration

In order for HDFS to function correctly, you must perform several important configuration modifications to the OS hosting the deployment. HDFS requires OS-level configuration settings typical of a production storage server.

  • /etc/sysctl.conf: set vm.swappiness to 0. If the OS swaps out the HDFS processes, they can fail to respond to RPC requests, resulting in the process being marked DOWN by the cluster. This is particularly troublesome for name nodes and journal nodes.
  • /etc/security/limits.conf: set nofile to unlimited. If this value is too low, jobs that operate on the HDFS cluster may fail due to too many open file handles.
  • /etc/security/limits.conf and /etc/security/limits.d/90-nproc.conf: set nproc to 32768. An HDFS node spawns many threads, which count toward the kernel nproc limit. If nproc is not set appropriately, the node will be killed.
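As a sketch, on a typical Linux agent these settings correspond to the following entries (file locations and limits syntax can vary by distribution):

# /etc/sysctl.conf
vm.swappiness=0

# /etc/security/limits.conf (and /etc/security/limits.d/90-nproc.conf for nproc)
* soft nofile unlimited
* hard nofile unlimited
* soft nproc 32768
* hard nproc 32768

Run sysctl -p to apply the sysctl change without a reboot; the limits take effect on the next login session.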

Using Volume Profiles

Volume profiles are used to classify volumes. For example, users can group SSDs into a “fast” profile and group HDDs into a “slow” profile.

NOTE: Volume profiles are immutable and therefore cannot contain references to specific devices, nodes or other ephemeral identifiers.

DC/OS Storage Service (DSS) is a service that manages volumes, volume profiles, volume providers, and storage devices in a DC/OS cluster.

Once the DC/OS cluster is running and volume profiles are created, you can deploy HDFS with the following configuration:

cat > hdfs-options.json <<EOF
{
    "journal_node": {
        "volume_profile": "hdfs",
        "disk_type": "MOUNT"
    },
    "name_node": {
        "volume_profile": "hdfs",
        "disk_type": "MOUNT"
    },
    "data_node": {
        "volume_profile": "hdfs",
        "disk_type": "MOUNT"
    }
}
EOF
dcos package install hdfs --options=hdfs-options.json

NOTE: HDFS will be configured to look for MOUNT volumes with the hdfs profile.

Once the HDFS service finishes deploying, its tasks will be running with the specified volume profiles:

dcos hdfs update status
deploy (serial strategy) (COMPLETE)
├─ journal (serial strategy) (COMPLETE)
│  ├─ journal-0:[node] (COMPLETE)
│  ├─ journal-1:[node] (COMPLETE)
│  └─ journal-2:[node] (COMPLETE)
├─ name (serial strategy) (COMPLETE)
│  ├─ name-0:[node, zkfc] (COMPLETE)
│  └─ name-1:[node, zkfc] (COMPLETE)
└─ data (serial strategy) (COMPLETE)
   ├─ data-0:[node] (COMPLETE)
   ├─ data-1:[node] (COMPLETE)
   └─ data-2:[node] (COMPLETE)