Mounting XFS on GKE

tl;dr Needed to set up an xfs filesystem on GKE. Had to fall back to the container-vm OS image to be able to install packages. Used a Kubernetes daemonset to run node "init scripts" that install xfsprogs.

We had a mongo replica set cluster set up in Google Container Engine. We wanted to switch to the xfs filesystem, as recommended for mongo. Here's where the complications start.

  1. The default node image on GKE is gci, based on ChromiumOS, which has no straightforward way to install packages such as xfsprogs.
  2. container-vm does have xfsprogs available via apt, but it is not installed by default, and custom images are not supported on GKE.
  3. There's no standard/recommended method for running node init scripts.

Our only option was to use container-vm as the OS image, so that we could simply apt-get install xfsprogs on each node, preferably at init time.

Initially, I was headed in the direction of editing the Instance Templates that GKE was using and adding apt-get install xfsprogs, but that approach was not portable enough: if I added a new node-pool, the changes to the Instance Template would not get carried over.

The solution turned out to be Kubernetes daemonsets. Daemonsets run a pod on every node, including newly created ones. This approach is even briefly described in the Kubernetes docs, and there's an example in kubernetes/contrib.

Putting it together
  1. Create a node-pool using the container-vm image (see the gcloud sketch after this list).
  2. Create a startup-script daemonset based on Kubernetes' startup-script example; the relevant env entry looks like this, and a fuller manifest sketch follows the list.

    - name: STARTUP_SCRIPT
      value: |
        #! /bin/bash
        set -o errexit
        set -o pipefail
        set -o nounset
        apt-get update || true
        apt-get install -y xfsprogs
    
  3. Apply the startup-script daemonset.
  4. Create an empty disk in GCE (sketch below).
  5. Set fsType: xfs on the PersistentVolume config (sketch below).
  6. Apply the Kubernetes config and let Kubernetes format the disk to xfs and mount it.
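
For step 1, here's a rough gcloud sketch of creating the node-pool. The cluster name, zone, pool name, and node count are placeholders, and the exact --image-type value may differ between gcloud versions:

    # create a node-pool whose nodes boot the container-vm image
    gcloud container node-pools create xfs-pool \
        --cluster my-cluster \
        --zone us-central1-a \
        --image-type container_vm \
        --num-nodes 3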
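
For steps 2 and 3, the env entry shown above sits inside a daemonset spec modeled on the kubernetes/contrib startup-script example. A sketch of the surrounding manifest, assuming the image and apiVersion used by that example at the time (both may have changed since):

    apiVersion: extensions/v1beta1
    kind: DaemonSet
    metadata:
      name: startup-script
      labels:
        app: startup-script
    spec:
      template:
        metadata:
          labels:
            app: startup-script
        spec:
          # the container runs the script against the host, so it needs
          # host PID namespace access and privileged mode
          hostPID: true
          containers:
            - name: startup-script
              image: gcr.io/google-containers/startup-script:v1
              securityContext:
                privileged: true
              env:
                - name: STARTUP_SCRIPT
                  value: |
                    #! /bin/bash
                    set -o errexit
                    set -o pipefail
                    set -o nounset
                    apt-get update || true
                    apt-get install -y xfsprogs

Applying it is a plain kubectl apply:

    kubectl apply -f startup-script-daemonset.yaml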
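
For step 4, creating the empty disk is a one-liner; the name, size, and zone are placeholders (the zone must match the node-pool's):

    # create an unformatted persistent disk for mongo data
    gcloud compute disks create mongo-data \
        --size 200GB \
        --zone us-central1-a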
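
For steps 5 and 6, a sketch of a PersistentVolume pointing at that disk with fsType: xfs (names and size are placeholders):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: mongo-data-pv
    spec:
      capacity:
        storage: 200Gi
      accessModes:
        - ReadWriteOnce
      gcePersistentDisk:
        # name of the GCE disk created in step 4
        pdName: mongo-data
        fsType: xfs

Once a pod claiming this volume is scheduled onto a node, Kubernetes formats the still-empty disk as xfs (using the xfsprogs the daemonset installed) and mounts it.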