
# Unable to `mount` overlayfs in Docker container when inside a LXC with a ZFS pool

Answer posted 2020-11-17 by ghost-in-the-zsh‭

# Summary

The **TL;DR** is that, as long as ZFS is being used as the underlying file system, `mount` commands on top of it will ***not*** work. It's simply not supported. I was also able to confirm this over email with Ubuntu/LXD developer Stéphane Graber, who said, in part:

> overlay doesn't work on top of ZFS. This isn't a permission or a container issue, it's a filesystem issue.

I've outlined two possible solutions below in more detail. Which one works for you may depend on your setup, as was my case, so read carefully.
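
For reference, the failure is easy to reproduce from inside an affected container. Below is a minimal sketch (the paths are just placeholders; run the commands as root):

```
# Set up the directories overlayfs needs...
$ mkdir -p /tmp/lower /tmp/upper /tmp/work /tmp/merged

# ...and attempt the mount. On a ZFS-backed root filesystem this fails,
# while the same command succeeds on, e.g., an ext4-backed one:
$ mount -t overlay overlay \
    -o lowerdir=/tmp/lower,upperdir=/tmp/upper,workdir=/tmp/work \
    /tmp/merged
```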

# Jenkins Workaround (My Case)

This "solution" is really more of a workaround than anything else. **TL;DR:** Don't let jobs that need overlays get scheduled in nodes that live within a ZFS pool.

My server setup boils down to a RAIDed NVMe OS installation drive and a set of larger data drives. The OS drive lacks the capacity for the work that needs to be done, and *all* of the data drives in this setup are VDEVs in the ZFS pool. This means there's nowhere to implement the more ideal solution of using LXD to add a new non-ZFS pool (see below). (In fact, LXD itself lives within the ZFS pool I had set up.)

Therefore, the workaround here was to re-arrange the Jenkins *labels* for both the jobs and the nodes. (If you're not familiar with Jenkins, it relies on these labels to determine which nodes can service which jobs/tasks.) The labels are arranged in such a way that Jenkins will *never* schedule a job that requires `mount`ing an overlay on an LXC-based node that cannot support it.

As the system administrator, you should know how your nodes are set up. The process of re-labeling jobs and nodes is done manually in the node/job configuration pages from Jenkins itself. In this case, you simply make sure that nodes hosted within a ZFS pool never have labels that match those of jobs that need overlays.
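
For example (the label names here are hypothetical), you might give every ZFS-backed node a `zfs` label and restrict overlay-requiring jobs with a label expression such as `overlayfs && !zfs`, so the scheduler can never place those jobs on an incompatible node.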

Note that your Docker containers must still be launched with the `--privileged` option for the `mount` commands to work. This is independent of the ZFS-specific issue described above.

# Adding Non-ZFS Pools (Ideal)

Note that this solution ***assumes*** you have extra drives and/or locations that are *outside* of the existing ZFS pool. Also note that, as I had said before, I was *not able to confirm this myself* due to my particular setup. Make sure you understand these steps before trying to apply them, as **you *do* run the risk of destroying your own data**, or at least accidentally "hiding" it, if you fail to understand what you're doing.

The **TL;DR** is from Stéphane Graber:

> Your only way out of this is to have you /var/lib/docker not be on ZFS. You can do that either by completely changing your storage pool to something else or by creating a second pool, allocate a dedicated volume from that and attach it to /var/lib/docker inside your container.
> 
> That's effectively the setup we made for Travis-CI where the instance itself is on a ZFS pool but /var/lib/docker comes from a directory (ext4) pool so overlayfs is happy.

This would've been a more ideal solution, but I don't have the ability to implement this in my setup. I'm including it here for the sake of completeness, but ***this is untested by me*** and you may need to do more work to properly adapt this solution to your setup.

Note that, in my case, it is *Jenkins* that runs the jobs, and Docker bind-mounts some directories from the host into the containers. Therefore, paths in later steps will focus on `/var/lib/jenkins` instead of `/var/lib/docker`.
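
For illustration (the image name, job path, and script are hypothetical), such a job boils down to something like:

```
# The workspace lives under /var/lib/jenkins on the *host*, which is why the
# steps below move /var/lib/jenkins, rather than /var/lib/docker, off of ZFS:
$ docker run --privileged \
    -v /var/lib/jenkins/workspace/my-job:/workspace \
    my-build-image:latest /workspace/run-build.sh
```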

First, understand that you ***cannot*** trust the `/etc/fstab` info from inside the LXC. For example, my LXC's `/etc/fstab` says:

```
$ cat /etc/fstab
LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
```

While it clearly claims to be `ext4`, this `ext4` filesystem may still be (and in my case, actually was) on top of ZFS. Therefore, `fstab` data should be ignored. Stéphane mentions you should rely on `/proc/mounts`:

> Your container is on ZFS, not on ext4, ignore /etc/fstab and look at /proc/mounts instead
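
A quick way to check (a sketch; both commands read the kernel's view of the mount table, so they reflect reality rather than `fstab`):

```
# Show the filesystem type that actually backs the container's root;
# `zfs` here means overlayfs on top of it will not work:
$ findmnt -n -o FSTYPE /

# Or read /proc/mounts directly, looking at the line whose second
# field (the mount point) is exactly "/":
$ awk '$2 == "/"' /proc/mounts
```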

In this case, you need to use LXD/LXC to set up a pool that is completely independent of the pre-existing ZFS pool, then create volumes there and attach them to your LXCs. (This likely means you'll need extra drives, because ZFS likes to consume whole drives when they're added to a pool. For example, if you're using ZFS on Linux and your OS installation drive uses an `ext4` file system, you will *not* be able to include that drive as part of the ZFS pool.)

The steps below are what I used *before* remembering that my extra non-ZFS pool was itself still on top of the existing ZFS pool; had it not been for that detail, this *should have* worked. Note that these steps *assume* a pre-existing Jenkins installation that you want to preserve. Otherwise, you can remove steps as needed.

After marking your Jenkins node offline and SSH'ing into it, move the Jenkins home directory to a backup location and create a new empty directory for it:

```
$ mv /var/lib/jenkins /var/lib/jenkins.old
$ mkdir /var/lib/jenkins
```

From a separate shell, SSH into the system *hosting* the LXCs. I chose to stop my LXC, but this may not be required:

```
$ lxc stop jenkins-node-01
```

Then create a non-ZFS storage pool and a storage volume inside of it. Here, the pool's name is `jenkins` and its driver is `dir`:

```
$ lxc storage create jenkins dir
$ lxc storage list              
+----------+-------------+--------+------------------------------------------------+---------+
|   NAME   | DESCRIPTION | DRIVER |                     SOURCE                     | USED BY |
+----------+-------------+--------+------------------------------------------------+---------+
| jenkins  |             | dir    | /var/snap/lxd/common/lxd/storage-pools/jenkins | 0       |
+----------+-------------+--------+------------------------------------------------+---------+
| lxd-pool |             | zfs    | tank/lxc                                       | 7       |
+----------+-------------+--------+------------------------------------------------+---------+
```

It's up to you to make sure that your LXD installation is *not* hosted within the ZFS pool in question, as it was in my setup; otherwise, this is where you'd be going back to square one.

Then create the node-specific volumes inside it. This is what I ran for node #1:

```
$ lxc storage volume create jenkins jenkins-node-01
$ lxc storage list                                 
+----------+-------------+--------+------------------------------------------------+---------+
|   NAME   | DESCRIPTION | DRIVER |                     SOURCE                     | USED BY |
+----------+-------------+--------+------------------------------------------------+---------+
| jenkins  |             | dir    | /var/snap/lxd/common/lxd/storage-pools/jenkins | 1       |
+----------+-------------+--------+------------------------------------------------+---------+
| lxd-pool |             | zfs    | tank/lxc                                       | 7       |
+----------+-------------+--------+------------------------------------------------+---------+
```

Note that, after creating the new `jenkins-node-01` volume inside the `jenkins` pool, the listing now shows the pool hosting 1 volume. Attach the volume to your node at the correct path:

```
$ lxc storage volume attach jenkins jenkins-node-01 jenkins-node-01 jenkins /var/lib/jenkins
```

Note that both the volume and the node are named `jenkins-node-01`; this is not an error. To confirm the volume is attached, run `lxc config show <node-name>`; the volume should show up under the `devices` section. (If you skipped the "backup" step, you will have mounted the volume on top of the pre-existing directory, and when you go back to your LXC the directory *will be empty*, because the volume on top of it is empty. Your data has *not* been destroyed; it's only hidden "under" the volume you just mounted. Just detach the volume and don't skip the "backup" step.)
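
For reference, the confirmation and, if needed, the back-out step look like this with the names used above:

```
# The attached volume should appear under the `devices` section:
$ lxc config show jenkins-node-01

# To back out, detach the volume from the node again:
$ lxc storage volume detach jenkins jenkins-node-01 jenkins-node-01
```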

If you had stopped your LXC, you may now run `lxc start <your-node>` and SSH back into it. From within the LXC node, copy the data from the prior backup location into the new volume, then change the file ownership back to the `jenkins` account (or whatever account you use):

```
$ cp -a /var/lib/jenkins.old/. /var/lib/jenkins/
$ chown -R jenkins:jenkins /var/lib/jenkins
```

Your node *should* now be ready for use and can be brought back online from the Jenkins admin GUI. After you've verified that everything is working as expected, you should be able to remove the `/var/lib/jenkins.old/` backup directory.

Be aware that, from now on, if this volume gets destroyed, your data goes with it. If you have backup processes, such as those that use `lxc export ...`, you may need to modify them: the `export` command only includes the containers themselves, *not* their attached volumes, as you might otherwise assume.
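
For example, if your LXD release is new enough to support exporting custom storage volumes (check `lxc storage volume export --help` on your version), a fuller backup might look like this, using the names from the examples above:

```
# Exports the container itself, but NOT the attached custom volume:
$ lxc export jenkins-node-01 jenkins-node-01.tar.gz

# The volume holding /var/lib/jenkins must be exported separately:
$ lxc storage volume export jenkins jenkins-node-01 jenkins-node-01-home.tar.gz
```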