So, disable or not?
Actually, I have some corrections to make. When I saw the numbers, I was so stunned that I could not think clearly…
I did the Linux tests on a filesystem that I had already been using for these tests, so it was exported async (on EMC discs). Here is the result (time tar -xpvzf pam_make-1.2-MRSL.tar.gz):
real 0m0.530s user 0m0.070s sys 0m0.110s
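Just to make the Linux side explicit: sync versus async is an option on the export line in /etc/exports. A minimal sketch, with a placeholder path and client (not my real configuration):
/export/test  192.168.0.0/24(rw,async,no_subtree_check)   # async export: the server replies before data hits disc
exportfs -ra                                              # re-read /etc/exports after changing async to sync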
The same command on Solaris with zil_disable, running on local SATA discs (mirror):
real 0m0.531s user 0m0.050s sys 0m0.050s
OK, OK… on EMC (it's pretty much the same; it's just memory):
real 0m0.525s user 0m0.070s sys 0m0.060s
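For anyone who wants to reproduce the zil_disable case: on the Solaris/Nevada builds of that era the ZIL is switched off either with a line in /etc/system (reboot required) or by poking the kernel variable live with mdb. This is only a sketch, and it throws away the synchronous-write guarantees, so do not run it on data you care about:
set zfs:zil_disable = 1                 # in /etc/system, takes effect on the next boot
echo "zil_disable/W0t1" | mdb -kw       # or live, writing the kernel variable directly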
So I changed the Linux export to the default configuration (sync), and here is the result (EMC discs):
real 0m1.651s user 0m0.070s sys 0m0.090s
and Solaris without disabling the ZIL (EMC too):
real 0m1.402s user 0m0.040s sys 0m0.070s
Running that command without disabling the ZIL on local SATA discs (mirror) was the problem:
real 0m7.959s user 0m0.060s sys 0m0.080s
UPDATED
With the AVS software (the pool on the secondary node, set in logging mode), Solaris ZFS/NFS without disabling the ZIL (SATA mirrored discs):
real 0m14.566s user 0m0.040s sys 0m0.100s
and disabling the ZIL:
real 0m0.527s user 0m0.030s sys 0m0.070s
With the AVS software (in replication mode), Solaris ZFS/NFS without disabling the ZIL (SATA mirrored discs):
real 0m30.790s user 0m0.070s sys 0m0.050s
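For context on what logging mode and replication mode mean here: with Availability Suite the remote-mirror state is driven by sndradm. The sketch below uses the standard subcommands but leaves out the volume-set arguments, which depend on the actual AVS configuration:
sndradm -P     # print the current state of the remote-mirror sets
sndradm -l     # put a set into logging mode (writes are only logged locally)
sndradm -u     # update resync: replay the log and go back to replicating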
That’s all.
Why not test with a set nocacheflush=1 line in your /etc/system file?
It gives good performance when the ZFS filesystem is used as an NFS server.
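For reference, that tunable also goes in /etc/system; on the builds I have seen it is spelled zfs_nocacheflush (older builds used zil_noflush instead), so check which name your kernel actually has. It tells ZFS to stop sending cache-flush commands to the discs after writes, which is only safe if the storage has battery-backed cache:
set zfs:zfs_nocacheflush = 1            # in /etc/system, needs a reboot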
I did that test, and the performance was really good! Take a look:
real 0m0.776s
user 0m0.040s
sys 0m0.060s
But I have two questions:
1) In the tests that I made (you can see them here: https://www.eall.com.br/blog/?p=89), that tuning did not make much difference. Maybe the iozone tests did not exercise that point.
2) How does "nocacheflush" change the fact that Solaris, to provide a proper NFS service, has to SYNC the data to disc (on a COMMIT or sync request from the client)?