Discussion: unionfs and nullfs combination
Nikos Vassiliadis
2017-02-19 10:51:56 UTC
Hi,

One relatively cheap way to create thin jails in the pre-ZFS era,
was to combine nullfs and unionfs (1). This seems to work only in
10 and previous branches. Do you use such a combination?

It seems like a very relevant feature nowadays, when people
use all these cloud-based systems, which oftentimes have too few
resources to run ZFS, so UFS is most likely the better choice...
https://rsmith.home.xs4all.nl/freebsd/using-nullfs-and-unionfs-for-the-ports-tree-in-a-jail.html
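Roughly, the idea is one read-only nullfs mount of a shared base plus a
per-jail writable unionfs layer mounted on top of it; a minimal sketch
(the paths here are just for illustration):

# shared, read-only base
mount -t nullfs -o ro /jails/base-jail /jails/thin-1
# per-jail writable layer on top of the same mountpoint
mount -t unionfs -o noatime /jails/upper/thin-1 /jails/thin-1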
Thanks in advance for your thoughts,
Nikos
Kurt Jaeger
2017-02-19 11:56:33 UTC
Hi!
Post by Nikos Vassiliadis
One relatively cheap way to create thin jails in the pre-ZFS era,
was to combine nullfs and unionfs (1). This seems to work only in
10 and previous branches. Do you use such a combination?
We had this running with FreeBSD 6.x, but unionfs had issues,
among them the whiteout problem.

If you have a directory where many small files with random
names are created in the upper layer, and deleted afterwards,
the directory in the upper layer grows with each file because
of the way whiteout files are handled. There's a mount option,
whiteout=whenneeded, that should fix this; I no longer remember
what stopped us from using it.
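From memory it was passed at mount time, roughly like this (a sketch
only, the paths are just for illustration):

# create whiteouts only when the lower layer actually has the name
mount -t unionfs -o noatime,whiteout=whenneeded /jails/upper/foo /jails/foo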
Post by Nikos Vassiliadis
It seems like a very relevant feature nowadays, when people
use all these cloud-based systems, which oftentimes have too few
resources to run ZFS, so UFS is most likely the better choice...
Funny, I have the impression that disk space, RAM and CPU are
plentiful compared to the past, so I would prefer ZFS anytime now.
Our next jail box will probably use ZFS dedup with lots of RAM.
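Enabling it is just a dataset property, something like this (the
pool/dataset name is only an example):

zfs set dedup=on tank/jails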
--
***@opsec.eu +49 171 3101372 3 years to go !
Nikos Vassiliadis
2017-02-19 14:20:49 UTC
Hi Kurt,
Post by Kurt Jaeger
We had this running with FreeBSD 6.x, but unionfs had issues,
among them the whiteout problem.
I am not sure exactly when unionfs was rewritten...

Sometime during 7-CURRENT, but I am not sure whether everything was
backported to 6.
Post by Kurt Jaeger
Post by Nikos Vassiliadis
It seems like a very relevant feature nowadays, when people
use all these cloud-based systems, which oftentimes have too few
resources to run ZFS, so UFS is most likely the better choice...
Funny, I have the impression that disk space, RAM and CPU are
plenty compared to the past, so I would prefer ZFS anytime now.
Our next jail box will probably use ZFS dedup with lots of RAM.
Very true. And I love ZFS.

I am talking about cloud installations where VMs are created and
destroyed at a fast pace and are mostly small, with something like
1 GB of RAM. UFS is very relevant for such installations, and being
cloud-friendly is a good thing :)

Nikos

Kurt Jaeger
2017-02-19 11:59:30 UTC
Hi!
Post by Nikos Vassiliadis
One relatively cheap way to create thin jails in the pre-ZFS era,
was to combine nullfs and unionfs (1). This seems to work only in
10 and previous branches. Do you use such a combination?
Ah, to correct myself here: We only used unionfs, not in combination
with nullfs. Can you describe why nullfs with unionfs does not
work in 11?
--
***@opsec.eu +49 171 3101372 3 years to go !
Nikos Vassiliadis
2017-02-19 14:21:19 UTC
Hi Kurt,
Post by Kurt Jaeger
Ah, to correct myself here: We only used unionfs, not in combination
with nullfs. Can you describe why nullfs with unionfs does not
work in 11?
It panics easily. I use the following shell script to create a working
set of jails:
PREFIX=/jails
BASEJAIL=${PREFIX}/base-jail
JAILS="mongo-1 mongo-2 mongo-3 mongo-4 mongo-5 mongo-6"

mkdir -p $BASEJAIL
for jail in $JAILS
do
        mkdir -p ${PREFIX}/$jail
        mkdir -p ${PREFIX}/upper/$jail
        # read-only base shared by all jails
        mount -t nullfs -o ro $BASEJAIL ${PREFIX}/$jail
        # per-jail writable layer on top of the same mountpoint
        mount -t unionfs -o noatime ${PREFIX}/upper/$jail ${PREFIX}/$jail
        # mount -t devfs none ${PREFIX}/$jail/dev
        # cp /etc/resolv.conf ${PREFIX}/$jail/etc/resolv.conf
done
#chroot $PREFIX/mongo-1 rm -rv /var
#chroot $PREFIX/mongo-2 rm -rv /var
Then I can trigger a panic if I run this:
rm -rf /jails/mongo-*/*
cpuid = 0
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe012058ce90
vpanic() at vpanic+0x186/frame 0xfffffe012058cf10
panic() at panic+0x43/frame 0xfffffe012058cf70
trash_ctor() at trash_ctor+0x4b/frame 0xfffffe012058cf80
uma_zalloc_arg() at uma_zalloc_arg+0x514/frame 0xfffffe012058cfe0
unionfs_relookup() at unionfs_relookup+0x41/frame 0xfffffe012058d040
unionfs_mkshadowdir() at unionfs_mkshadowdir+0x120/frame 0xfffffe012058d270
unionfs_lookup() at unionfs_lookup+0x883/frame 0xfffffe012058d3c0
VOP_CACHEDLOOKUP_APV() at VOP_CACHEDLOOKUP_APV+0xda/frame 0xfffffe012058d3f0
vfs_cache_lookup() at vfs_cache_lookup+0xd6/frame 0xfffffe012058d450
VOP_LOOKUP_APV() at VOP_LOOKUP_APV+0xda/frame 0xfffffe012058d480
lookup() at lookup+0x6d2/frame 0xfffffe012058d520
namei() at namei+0x504/frame 0xfffffe012058d5e0
kern_statat() at kern_statat+0x98/frame 0xfffffe012058d790
sys_fstatat() at sys_fstatat+0x2c/frame 0xfffffe012058d830
amd64_syscall() at amd64_syscall+0x2f9/frame 0xfffffe012058d9b0
Xfast_syscall() at Xfast_syscall+0xfb/frame 0xfffffe012058d9b0
--- syscall (493, FreeBSD ELF64, sys_fstatat), rip = 0x8008ba62a, rsp = 0x7fffffffe728, rbp = 0x7fffffffe7e0 ---
KDB: enter: panic
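For completeness, to reset things between tests I unmount the layers in
reverse order; a rough sketch, reusing the variables from the script
above:

for jail in $JAILS
do
        # umount ${PREFIX}/$jail/dev   # only if devfs was mounted
        umount ${PREFIX}/$jail         # topmost mount first (unionfs)
        umount ${PREFIX}/$jail         # then the nullfs mount underneath
done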
Thanks in advance for any ideas,
Nikos