dcos storage provider

ENTERPRISE

Manage volume providers.

Synopsis

A volume provider manages storage capacity in the cluster. It corresponds to a single instance of a CSI plugin.

The Container Storage Interface (CSI) is a standard through which different storage technologies can expose storage capacity to a cluster. For example, an LVM volume group on a specific agent represents a pool of storage capacity in the cluster. To expose that LVM volume group to the workloads running on the cluster, you need to configure a volume provider.

A cluster can have multiple volume providers, and several volume providers may share the same type. For example, if you want to configure two LVM volume groups on an agent, you would create two volume providers on that agent.

The DC/OS Storage Service (DSS) supports different types of volume providers. The type of a volume provider is determined by its DSS volume plugin. Every volume provider specifies a single volume plugin in its configuration. There are different DSS volume plugins, and each one corresponds to a different CSI plugin that exposes storage capacity to the cluster.
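
For example, a provider configuration names its volume plugin under the spec.plugin field. The skeleton below only illustrates the shape of such a configuration; the placeholder values are not real, and the worked example later in this section shows complete specifications.

{
    "name": "<provider-name>",
    "spec": {
        "plugin": {
            "name": "lvm",
            "config-version": "latest"
        },
        "node": "<node-id>",
        "plugin-configuration": {
            "devices": ["<device>"]
        }
    }
}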

The DSS currently supports local volume providers. Local volume providers offer storage resources that are necessarily bound to individual nodes (e.g., individual devices, LVM volume groups, etc.), as opposed to storage resources that may be detached/reattached to different nodes (e.g., Amazon EBS volumes).

When a local volume provider is created on an agent, the DC/OS Storage Service launches an instance of the associated CSI plugin on that agent. The DSS volume plugin ensures that the CSI plugin is properly configured and runs with the correct environment variables and command-line flags.

For a list of DSS volume plugins, see the “Volume plugins” section of the documentation: </mesosphere/dcos/services/storage/>

Let’s imagine Dan has a DC/OS cluster on which he would like to run workloads that use persistent storage. Dan’s cluster consists of masters, agents and public agents. On each agent there are several storage devices. For this example we’ll assume that every agent has the same device composition, but nothing prevents Dan from having a different number of devices on each agent.

Every agent has the following devices:

  • /dev/xvda: the OS is installed on this device
  • /dev/xvdb, /dev/xvdc: HDDs intended for archiving data
  • /dev/xvdd, /dev/xvde: SSDs optimized for read performance
  • /dev/xvdf, /dev/xvdg: SSDs optimized for write performance

Dan wants to group the different storage devices into separate LVM volume groups and then expose those LVM volume groups to the DC/OS Storage Service, so their storage capacity becomes available in his cluster.

He creates three separate volume providers, each with its plugin field set to lvm. The “lvm” volume plugin documentation specifies that the devices field is required and must list the devices that will form the LVM volume group exposed through the new volume provider. He also adds labels to each volume provider so that he can later refer to them in generic terms (for example, by latency).

# First, he lists the storage devices available in his cluster.
dcos storage device list
NODE                                     NAME   STATUS  ROTATIONAL  TYPE  FSTYPE  MOUNTPOINT
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvda   ONLINE  false       disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvda1  ONLINE  false       part  xfs     /
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvdb   ONLINE  true        disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvdc   ONLINE  true        disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvdd   ONLINE  false       disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvde   ONLINE  false       disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvdf   ONLINE  false       disk  -       -
c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1  xvdg   ONLINE  false       disk  -       -
...
# We're working with a single node, so let's reuse its node ID.
export NODE_ID=c67efa5d-34fa-4bc5-8b21-2a5e0bd52385-S1

# Create a 'lvm' volume provider that groups /dev/xvdb and /dev/xvdc into an
# LVM volume group and exposes that VG's storage capacity to DC/OS.
cat <<EOF | dcos storage provider create
{
    "name": "hdd-${NODE_ID}",
    "description": "high-latency archival devices",
    "spec": {
        "plugin": {
            "name": "lvm",
            "config-version": "latest"
        },
        "node": "${NODE_ID}",
        "plugin-configuration": {
            "devices": ["xvdb", "xvdc"]
        },
        "labels": {
            "latency": "high"
        }
    }
}
EOF
# Create a 'lvm' volume provider that groups /dev/xvdd and /dev/xvde into an
# LVM volume group and exposes that VG's storage capacity to DC/OS. We label
# this provider as "read-optimized", as it consists of devices that we know are
# optimized for reading and we'd like to target them for certain workloads.
cat <<EOF | dcos storage provider create
{
    "name": "ssd-ro-${NODE_ID}",
    "description": "low-latency read-optimized devices",
    "spec": {
        "plugin": {
            "name": "lvm",
            "config-version": "latest"
        },
        "node": "${NODE_ID}",
        "plugin-configuration": {
            "devices": ["xvdd", "xvde"]
        },
        "labels": {
          "latency": "low",
          "read-optimized": "true"
        }
    }
}
EOF
# Create a 'lvm' volume provider that groups /dev/xvdf and /dev/xvdg into an
# LVM volume group and exposes that VG's storage capacity to DC/OS. We label
# this provider as "write-optimized", as it consists of devices that we know are
# optimized for writing and we'd like to target them for certain workloads.
cat <<EOF | dcos storage provider create
{
    "name": "ssd-wo-${NODE_ID}",
    "description": "low-latency write-optimized devices",
    "spec": {
        "plugin": {
            "name": "lvm",
            "config-version": "latest"
        },
        "node": "${NODE_ID}",
        "plugin-configuration": {
            "devices": ["xvdf", "xvdg"]
        },
        "labels": {
          "latency": "low",
          "write-optimized": "true"
        }
    }
}
EOF
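
# At this point Dan can confirm that all three providers exist and report
# their status. A minimal check, assuming the provider listing subcommand of
# the storage CLI (the exact output columns may vary between DSS versions).
dcos storage provider list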

Dan now has three volume providers: hdd-[NODE_ID], ssd-ro-[NODE_ID], and ssd-wo-[NODE_ID]. To create volumes for his workloads, he now needs to create volume profiles. For more information on volume profiles, run dcos storage profile --help.
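
As a preview, a profile that targets the read-optimized providers by their labels could look roughly like the sketch below. The field names here are assumptions modeled on the provider specifications above, not a definitive schema; consult dcos storage profile create --help and the volume profile documentation for the exact format.

# Sketch only: the profile spec fields are illustrative assumptions;
# see 'dcos storage profile create --help' for the authoritative schema.
cat <<EOF | dcos storage profile create
{
    "name": "fast-reads",
    "description": "volumes backed by read-optimized SSDs",
    "spec": {
        "provider-selector": {
            "plugin": "lvm",
            "matches": {
                "labels": {
                    "latency": "low",
                    "read-optimized": "true"
                }
            }
        }
    }
}
EOF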

dcos storage provider [flags]

Options inherited from parent commands

Name                  Description
-h, --help            Help for this command.
--timeout duration    Override the default operation timeout. (default 55s)
-v, --verbose         Verbose mode.