
I am having trouble configuring a statically provisioned EFS such that multiple pods, which run as a non-root user, can read and write the file system.

I am using the AWS EFS CSI Driver. My version info is as follows:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.18", GitCommit:"6f6ce59dc8fefde25a3ba0ef0047f4ec6662ef24", GitTreeState:"clean", BuildDate:"2021-04-15T03:31:30Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:53:22Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

I followed the multiple-pods example from the GitHub repo (https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods), updating the volumeHandle appropriately. The busybox containers defined in the example specs are able to read and write the file system, but when I add the same PVC to a pod that does not run as the root user, the pod is unable to write to the mounted EFS. I have also tried a couple of other things to get this working as I expected, such as the securityContext on the app3 pod and the pv.beta.kubernetes.io/gid annotation on the PersistentVolume shown below.

None of these configurations allowed a non-root user to write to the mounted EFS. What am I missing in terms of configuring a statically provisioned EFS so that multiple pods, all running as a non-root user, can read and write the mounted file system?
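A quick way to see the failure is to attempt a write from inside the non-root pod (this assumes the app3 pod and /data mount path defined below):

kubectl exec app3 -- sh -c 'touch /data/test.txt'
# busybox reports a POSIX permission error, e.g.:
# touch: /data/test.txt: Permission denied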

For reference here are the pod definitions:

apiVersion: v1
kind: Pod
metadata:
  name: app1
spec:
  containers:
  - name: app1
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out1.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: app2
spec:
  containers:
  - name: app2
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out2.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim
---
apiVersion: v1
kind: Pod
metadata:
  name: app3
spec:
  containers:
  - name: app3
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out3.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  securityContext:
    runAsUser: 1000
    runAsGroup: 1337
    fsGroup: 1337
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

And the SC/PVC/PV:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
  annotations:
    pv.beta.kubernetes.io/gid: {{ .Values.groupId | quote }}
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-asdf123
— R. Arctor

2 Answers


If anyone comes across this later: I resolved my issue by using an initContainer in any Pod that needed to write to the file system.

For example:

apiVersion: v1
kind: Pod
metadata:
  name: app3
spec:
  containers:
  - name: app3
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out3.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
    securityContext:
      runAsGroup: 1337
  initContainers:
  # Runs as root before app3 starts and hands group ownership of the
  # EFS mount to gid 1337 so the non-root main container can write to it.
  - name: fs-owner-change
    image: busybox
    command:
    - chown
    - "root:1337"
    - "/efs-fs"
    volumeMounts:
    - mountPath: /efs-fs
      name: persistent-storage
  securityContext:
    fsGroup: 1337
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

The rest of the definitions match what was in my question.
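One caveat worth noting: the root directory of a brand-new EFS file system is owned by root with 755 permissions, so changing the group alone may not be enough on every file system. Here is a hedged variant of the initContainer that also grants group write (the sh -c wrapper and the chmod are illustrative additions, not part of the fix above):

  initContainers:
  - name: fs-owner-change
    image: busybox
    command: ["/bin/sh", "-c"]
    # chown hands group ownership to gid 1337; chmod 2775 adds group write
    # plus the setgid bit so files created later inherit the group.
    args: ["chown root:1337 /efs-fs && chmod 2775 /efs-fs"]
    volumeMounts:
    - mountPath: /efs-fs
      name: persistent-storage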


Roughly following what @JerryChen suggested ("use access points"), I discovered that things were simpler if I just used a dynamically provisioned EFS volume, which uses EFS access points under the hood to allow shared access to an EFS file system. Below are the StorageClass, PersistentVolumeClaim, and an example Pod.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: {{ .Values.efsVolumeHandle }}
  directoryPerms: "775"
reclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi  # Not actually used - see https://aws.amazon.com/blogs/containers/introducing-efs-csi-dynamic-provisioning/
---
apiVersion: v1
kind: Pod
metadata:
  name: app3
spec:
  containers:
  - name: app3
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /data/out3.txt; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /data
  securityContext:
    runAsUser: 1000
    runAsGroup: 1337
    fsGroup: 1337
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: efs-claim

Note the directoryPerms ("775") specified in the StorageClass, as well as the runAsGroup and fsGroup specified in the Pod. When utilizing this PVC in a Pod that runs as a non-root user, a shared group ID is the key.

runAsUser was only used to ensure the busybox command was executed by a non-root user.

This is likely leagues better than brute-forcing the file system permissions with an initContainer.
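To confirm what the driver provisioned, the generated access point (including its enforced POSIX user and root-directory creation info) can be inspected with the AWS CLI; fs-asdf123 is the placeholder file system ID from the question:

aws efs describe-access-points --file-system-id fs-asdf123
# Check the PosixUser and RootDirectory.CreationInfo fields in the output.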

— R. Arctor

You should try EFS access points: https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html

An access point enforces an operating system user and group, and optionally a root directory, for every file system request made through it.

In other words, you can make every request through the access point act as uid 1337 and gid 1337.
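For example, here is a hedged sketch of wiring an access point into the statically provisioned PV from the question (the fsap-... ID and the /shared path are made-up placeholders):

# Create an access point that maps every request to uid/gid 1337
# and creates /shared owned by 1337:1337 with 775 permissions.
aws efs create-access-point \
  --file-system-id fs-asdf123 \
  --posix-user Uid=1337,Gid=1337 \
  --root-directory 'Path=/shared,CreationInfo={OwnerUid=1337,OwnerGid=1337,Permissions=775}'

Then, in recent versions of the EFS CSI driver, the PV can reference the access point directly in the volume handle:

  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-asdf123::fsap-0123456789abcdef0  # <fs-id>::<access-point-id>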

— Jerry Chen