[OmniOS-discuss] Backing up HUGE zfs volumes

Schweiss, Chip chip at innovates.com
Thu May 21 12:24:56 UTC 2015


I would caution against anything using 'zfs diff'. It has been perpetually
broken, either not working at all or returning incomplete information.
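
For context, the interface in question looks like this (hypothetical
dataset and snapshot names):

    # List files changed between two snapshots; each line is prefixed
    # with -, +, M, or R (removed, created, modified, renamed).
    zfs diff tank/data@monday tank/data@tuesday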

Avoiding a crawl of the directory tree is pretty much impossible unless you
use 'zfs send'. However, as long as there is enough cache on the system,
directory crawls can be very efficient. I have daily rsync jobs that crawl
over 200 million files, and the impact of the crawl is not noticeable to
other users.
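
A minimal sketch of such a job (hypothetical paths; -a preserves
permissions, ownership, and timestamps, and --delete keeps the target in
step with deletions on the source):

    # Nightly crawl-and-sync of the whole dataset to an archive host.
    rsync -a --delete /tank/data/ archivehost:/backup/data/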

I have also used 'zfs send' to AWS Glacier. This worked well until the data
change rate got high enough that I needed to start over too often to keep
the storage size on Glacier reasonable.
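
The basic approach was along these lines (hypothetical snapshot names; the
actual upload to Glacier depends on your provider tooling):

    # Full send to seed the archive, then incrementals between
    # successive daily snapshots.
    zfs send tank/data@2015-05-20 | gzip > /staging/full.zfs.gz
    zfs send -i tank/data@2015-05-20 tank/data@2015-05-21 | \
        gzip > /staging/incr-2015-05-21.zfs.gz

The catch is that an incremental stream only restores on top of the chain
it descends from, so as the chain grows the only way to shrink the stored
set is a fresh full send, i.e. starting over.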

I also use CrashPlan on my home OmniOS server to back up about 5TB.  It
works very nicely.

-Chip

On Wed, May 20, 2015 at 6:51 PM, Michael Talbott <mtalbott at lji.org> wrote:

> I'm trying to find ways of efficiently archiving some huge (120TB and
> growing) ZFS volumes with millions, maybe billions, of files of all sizes.
> I use zfs send/recv for replication to another box for tier 1/2 recovery.
> But I'm trying to find a good open source solution that runs on OmniOS for
> archival purposes that doesn't have to crawl the filesystem or rely on any
> proprietary formats.
>
> I was thinking I could use zfs diff to get a list of changed data, parse
> that into a usable format, and create a tar and par of the data along with
> an accompanying plain-text index file. From there, upload that set of data
> to a cloud provider. While I could probably script it all out myself, I'm
> hoping someone knows of an existing solution that produces somewhat
> similar results.
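>
> A rough sketch of that pipeline (hypothetical names throughout; par2
> stands in for "par", GNU tar is assumed, and paths containing spaces
> would need more careful parsing):
>
>     # 1. Diff the last two snapshots into a plain-text index.
>     zfs diff tank/data@prev tank/data@curr > /staging/index.txt
>     # 2. Tar up the created/modified entries (+ and M).
>     awk '$1 == "+" || $1 == "M" { print $2 }' /staging/index.txt | \
>         tar -cf /staging/changes.tar -T -
>     # 3. Generate parity data alongside the archive.
>     par2 create /staging/changes.tar
>     # 4. Upload changes.tar, the .par2 files, and index.txt to the
>     #    cloud provider of choice.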
>
> Ideas anyone?
>
> Thanks,
>
> Michael
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>