wilhelmkeiserII
Cadet
- Joined
- Sep 5, 2017
- Messages
- 8
Is there a proper way to split a ZFS send stream at the destination so that it isn't one gigantic file but several files instead? I recently attempted a zfs send to S3, but discovered that S3 imposes a 5 TB size limit per object.
Someone suggested doing the zfs send to a local disk first, splitting the file, and then uploading the pieces to S3, but I found that to be a waste of disk space that could otherwise store active data.
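One sketch of a workaround that avoids the intermediate local copy entirely: pipe the send stream through GNU coreutils `split`, whose `--filter` option runs a command once per chunk (with the chunk's name in `$FILE`) instead of writing the chunk to disk. Each chunk can be piped straight into `aws s3 cp` from stdin. The dataset name, snapshot, bucket, and prefix below are all placeholders, and this assumes GNU split and the AWS CLI are installed:

```shell
# Stream a zfs send through split; --filter uploads each chunk from
# stdin as it is produced, so no full-size local copy ever exists.
# "tank/data@snap1" and the bucket path are hypothetical examples.
zfs send tank/data@snap1 \
  | split -b 1T -d -a 4 --filter='aws s3 cp - "s3://my-bucket/backup/$FILE"' - zfs-stream.

# Restore: fetch the parts in lexical (i.e. chunk) order and feed the
# reassembled stream to zfs receive.
for part in $(aws s3 ls s3://my-bucket/backup/ | awk '{print $4}' | sort); do
  aws s3 cp "s3://my-bucket/backup/$part" -
done | zfs receive tank/restored
```

Keeping each chunk under 5 TB satisfies the S3 object limit; `-d -a 4` gives numeric, fixed-width suffixes so a plain `sort` restores the original order. Note that a zfs receive only succeeds on a byte-exact stream, so verifying checksums of the uploaded parts would be prudent.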