AVS reverse sync performance
I have a test environment with two 150GB disks configured as a ZFS mirror (150GB usable space). I set up an AVS configuration to replicate those disks to another server. At some point I lost the sync between the servers, and the state on the primary node was not sane, so I did a full resync from the secondary server.
Here is the time it took to complete that full resynchronization (secondary -> primary).
# dsstat
name              t  s   pct    role  ckps  dkps  tps   svt
dev/rdsk/c2d0s0   P  RS  99.89  net      -   Inf    0  -NaN
dev/rdsk/c2d0s1                 bmp      0   102    0  -NaN
dev/rdsk/c3d0s0   P  RS  99.89  net      -   Inf    0  -NaN
dev/rdsk/c3d0s1                 bmp      0    73    0  -NaN
# time /usr/sbin/sndradm -C local -g POOLNAME -n -w

real    464m18.109s
user    0m0.107s
sys     0m0.075s
Almost eight hours…
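A quick back-of-the-envelope check puts that number in perspective. Assuming the 150GB figure means 150 GiB per replicated volume (an assumption, since the post just says "150GB discs"), the effective throughput per volume works out to only a few MiB/s:

```python
# Effective resync throughput, from the numbers above.
# Assumption: 150GB ~= 150 GiB per replicated volume; note that dsstat
# shows TWO volumes (c2d0s0 and c3d0s0) syncing, since AVS replicates
# below ZFS and each half of the mirror is sent separately.
volume_mib = 150 * 1024            # one 150 GiB volume, in MiB
elapsed_s = 464 * 60 + 18.109      # "real 464m18.109s"

throughput_mib_s = volume_mib / elapsed_s           # per-volume rate
throughput_mbit_s = throughput_mib_s * 8 * 1.048576 # MiB/s -> Mbit/s

print(f"~{throughput_mib_s:.1f} MiB/s per volume (~{throughput_mbit_s:.0f} Mbit/s)")
```

Even doubled for the two mirror halves on the wire, that is a small fraction of what a gigabit link can carry (roughly 110–115 MiB/s of payload), which suggests the bottleneck was not raw link bandwidth.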
# dsstat
name              t  s   pct    role  ckps  dkps  tps   svt
dev/rdsk/c2d0s0   P  R    0.00  net      -     0    0     0
dev/rdsk/c2d0s1                 bmp      0     0    0     0
dev/rdsk/c3d0s0   P  R    0.00  net      -     0    0     0
dev/rdsk/c3d0s1                 bmp      0     0    0     0
The interface is Gigabit Ethernet, and it is the same interface used to access other services. I think a good approach would be to have a dedicated interface to handle the sync/resync tasks. In this specific case, I don't know if the time would change much, because the server was idle and practically all the traffic was the AVS replication itself…
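For reference, one common way to dedicate an interface to SNDR is to give each node a second hostname bound to the replication NIC and enable the sets against those names. This is only a hedged sketch: the hostnames, addresses, and interface below are made up for illustration, and the `sndradm -e` argument order should be checked against the AVS documentation for your release.

```shell
# /etc/hosts on both nodes -- hypothetical replication-only addresses
# bound to a second NIC (e.g. e1000g1 on a private VLAN):
#   192.168.100.1  primary-repl
#   192.168.100.2  secondary-repl

# The SNDR set would then be enabled using the replication hostnames,
# so sync/resync traffic never touches the service interface, e.g.:
# sndradm -e primary-repl   /dev/rdsk/c2d0s0 /dev/rdsk/c2d0s1 \
#         secondary-repl    /dev/rdsk/c2d0s0 /dev/rdsk/c2d0s1 ip sync
```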