ZFS replication using ZRepl

Until yesterday I used a simple USB disk as my ZFS backup pool. I had a moderately complex script, started by cron at 6 o'clock every morning, that created a new snapshot of the datasets storing some vital information and then zfs send / zfs recv'd it to the USB disk.
It wasn't too elegant, and it always made me a bit nervous knowing how volatile USB disks are. My 2 cents: do not use USB disks for ZFS, but if you have to, then at least make sure you have some proper redundancy with non-USB disks.
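For the curious, the old cron job boiled down to something like this (pool and dataset names are examples, not my real ones):

```shell
#!/bin/sh
# Rough sketch of the old nightly backup: snapshot, then send to the USB pool.
set -e
SNAP="tank/vault@backup-$(date +%Y%m%d)"
zfs snapshot "${SNAP}"
# Full send here for brevity; the real script did incremental sends (-i).
zfs send "${SNAP}" | zfs recv -F usbpool/vault
```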
Nevertheless, that is all in the past now, because I have just set up an OmniOS VM in the cloud for backup purposes.
It was a real adventure, as the service provider doesn't allow uploading a custom ISO via the admin web UI, but you can request it and they attach the specified ISO to the VM. After rebooting, the machine booted into the … err … some unknown anti-spam appliance. It seems they had messed something up, but eventually my VM booted into OmniOS. After a quick setup my instance was ready to receive the backups.
I had already heard about ZRepl before, but I hadn't had any real exposure, so I decided to give it a try. There is an OmniOS guide, which was a big motivation, but at the end of the day I did a custom install, which could be interesting for you as well:
First, you have to decide:
- which datasets need to be backed up
- which child datasets should be excluded (if any)
- how often they should be backed up
- how long the backups should be stored
- etc.
I got 20 GiB of space on my VM, which is plenty compared to the 8 GiB USB drive I have used so far.
To make my future life easier, I created a file-based zpool, which will simplify migration later. I have only one virtual disk, so I had no other choice anyway. AFAIK you shouldn't store any sensitive data on rpool.
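Creating a file-backed pool is just a zpool create on a regular file; something like this, with a made-up path and size:

```shell
# A file-backed pool named "storage" (backing file path and size are examples).
mkfile 15g /var/zpools/storage.img
zpool create storage /var/zpools/storage.img
zfs create -p storage/zrepl/sink   # the root_fs for the zrepl sink job below
```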
So I followed the OmniOS ZRepl guide up to the service configuration, but I decided to go with SSH-based authentication. This is not recommended, as it results in lower bandwidth, but it is also much easier to set up. Basically the client calls into the backup server and pushes its data via SSH. I only needed to set up some SSH keys and add them to the authorized_keys file on the VM side. Easy! (Compared to rolling your own CA, for sure!)
I also used this blog post as input.
Let's see how I did it:
Backup Server:
root@omnibackup:~# cat /etc/opt/ooce/zrepl/zrepl.yml
# zrepl main configuration file.
# For documentation, refer to https://zrepl.github.io/
#
global:
  logging:
    - type: "stdout"
      level: "debug"
      format: "human"

jobs:
  - name: sink
    type: sink
    serve:
      type: stdinserver
      client_identities:
        - "client_nas"
    root_fs: "storage/zrepl/sink"
    recv:
      placeholder:
        encryption: off # See https://zrepl.github.io/configuration/sendrecvoptions.html#placeholders
Client:
root@omnios:~# cat /etc/opt/ooce/zrepl/zrepl.yml
global:
  logging:
    - type: "stdout"
      level: "debug"
      format: "human"

jobs:
  - name: client_to_master
    type: push
    connect:
      host: "test.extrowerk.com"
      identity_file: /root/.ssh/backup-id_rsa
      port: 22
      type: ssh+stdinserver
      user: root
    filesystems: {
      "tank/storage/vault": true,
    }
    send:
      encrypted: true
    snapshotting:
      type: periodic
      prefix: zrepl_
      interval: 10m
    pruning:
      keep_sender:
        - type: not_replicated
        - type: last_n
          count: 10
      keep_receiver:
        - type: grid
          grid: 1x1h(keep=all) | 24x1h | 30x1d | 6x30d
          regex: "^zrepl_"
I had to set up a new RSA-based SSH key and add the public part to the backup server's authorized_keys file (for the root user!). I tried to use Ed25519-based keys, but zrepl didn't like them for some reason. In the meantime I have created a PR at omnios-extra for zrepl-0.7.0; hopefully the issue has already been fixed in that version.
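The key setup was roughly the following; the forced command in authorized_keys is what routes the connection into zrepl's stdinserver (the exact incantation is in the zrepl transport docs, and the identity string has to match client_identities on the server):

```shell
# On the client: a dedicated RSA key for the backup connection.
ssh-keygen -t rsa -b 4096 -f /root/.ssh/backup-id_rsa -N '' -C zrepl-backup

# On the backup server: append the public key to /root/.ssh/authorized_keys,
# forced into zrepl's stdinserver mode for the identity "client_nas", e.g.:
#   command="zrepl stdinserver client_nas",restrict ssh-rsa AAAA... zrepl-backup
```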
There was also the question of encrypted datasets. In this specific case I decided not to allow the backup VM to decrypt the data, so I had to add the following lines to the config:
recv:
  placeholder:
    encryption: off
The zrepl user guide says: "For encrypted-send-to-untrusted-receiver, the placeholder datasets need to be created with -o encryption=off. This doesn't mean the data gets transferred or stored unencrypted."
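To double-check this, the encryption property can be inspected on the backup server: the placeholder datasets report off, while the received dataset should still show the sender's encryption (a sketch, output not shown):

```shell
# On the backup server: inspect encryption across the whole sink hierarchy.
zfs get -r -t filesystem encryption storage/zrepl/sink
```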
After setting everything up, I've got:
FILTER
┌────────────┐╔══════════════════════════════════════════════════════════════════════╗
│jobs │║Job: client_to_master ║
│└──client_to│║Type: push ▒║
│ │║ ▒║
│ │║Replication: ▒║
│ │║ Attempt #1 ▒║
│ │║ Status: done ▒║
│ │║ Last Run: 2026-03-03 21:02:50 +0100 CET (lasted 2s) ▒║
│ │║ tank/storage/vault DONE (step 1/1, 1.6 KiB/624 B) ▒║
│ │║ ▒║
│ │║Pruning Sender: ▒║
│ │║ Status: Done ▒║
│ │║ tank/storage/vault Completed (destroy 1 of 11 snapshots) ▒║
│ │║ ▒║
│ │║Pruning Receiver: ▒║
│ │║ Status: Done ▒║
│ │║ tank skipped: filesystem is placeholder ▒║
│ │║ tank/storage skipped: filesystem is placeholder ▒║
│ │║ tank/storage/vault Completed (destroy 1 of 20 snapshots) ▒║
│ │║ ▒║
└────────────┘╚══════════════════════════════════════════════════════════════════════╝
2026-03-03T21:09:00+01:00Q quit <TAB> switch panes W wrap lines Shift+M toggle nav
It seems the service runs just fine:
root@omnibackup:~# svcs -xv
root@omnibackup:~# zfs list -r -t snapshot storage
NAME USED AVAIL REFER MOUNTPOINT
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_062248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_072248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_082248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_092248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_102248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_112248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_122248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_132248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_142248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_152248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_162248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_172248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_182248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_192248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_200248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_201248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_202248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_203248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_204248_000 8K - 99.8M -
storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_205248_000 0B - 99.8M -
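Restoring should be a plain zfs send in the other direction; a raw (-w) send keeps the data encrypted on the wire and at rest. A sketch with a made-up target dataset name, assuming an SSH key without the zrepl forced command:

```shell
# On the client: pull one snapshot back from the backup server.
ssh root@test.extrowerk.com \
    zfs send -w storage/zrepl/sink/client_nas/tank/storage/vault@zrepl_20260303_205248_000 \
  | zfs recv tank/storage/vault_restored
```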
root@omnibackup:~#

Hi and thanks for reading my blog!