Sorry for the disjointed text, but the quoting doesn't really work. Might be my browser.
Really? You've never had fsck do anything but preen, never found any contents in a lost+found directory, never found a mysteriously truncated file?
>> Yes, maybe, but nothing as disastrous as described in the PowerPoint about ZFS.
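(For anyone following along: on ext4 the difference between a preen and a real repair looks roughly like this; the device and mount point are placeholders, and the filesystem must be unmounted for the full check.)

    # Preen mode: quietly fix only safe, minor inconsistencies
    fsck -p /dev/sdb1
    # Force a full consistency check even if the filesystem looks clean
    e2fsck -f /dev/sdb1
    # Orphaned files recovered during repair land here, named by inode number
    ls /mnt/data/lost+found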
It's a complexity issue. The design aims to keep bad things from happening in the first place. A ZFS system can be storing a petabyte or more of information, and a full fsck-style offline check of that would take both a huge amount of memory and a huge amount of time. Your Linux MDADM, LSI RAID, and ext4 do not have the ability to manage petabyte-sized filesystems.
>> I doubt size is that relevant. You can always split data across filesystems. Practicality, manageable complexity, and recoverability are much more important. Not shipping recovery tools is just negligent. Who cares if an fsck takes resources? A filesystem should let the user decide whether their data is worth that resource.
>> Besides, ext4 can handle volumes up to 1 exbibyte (2^60 bytes).
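(To be fair to the ZFS side, its answer to fsck is an online scrub rather than an offline check; a rough sketch, with the pool name made up.)

    # Walk every block in the pool, verify checksums, and repair
    # from redundancy where possible -- the pool stays online throughout
    zpool scrub tank
    # Watch progress and see any errors that were found
    zpool status tank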
Then ext3, ext4, and ffs all need to be labeled as no-gos too, because they also are not portable across "different server environments."
>> They are indeed. I have ported them many times.
How do you build a several petabyte sized filesystem with LVM?
>> Who needs a petabyte in one single filesystem?
Already discussed.
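(For what it's worth, mechanically it isn't hard; whether it's wise is another question. A rough sketch with made-up device names, assuming each disk is already a multi-terabyte array.)

    # Turn each disk or array into an LVM physical volume
    pvcreate /dev/sd[b-z]
    # Pool them all into one volume group
    vgcreate bigvg /dev/sd[b-z]
    # Carve out a single huge logical volume
    lvcreate -l 100%FREE -n biglv bigvg
    # ext4 needs the 64bit feature for volumes beyond 16 TiB
    mkfs.ext4 -O 64bit /dev/bigvg/biglv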
Really? "force" mounting shouldn't be allowed? There's a reason it is called "force" mounting, it bypasses the safety checks. And I betcha I can dd /dev/zero onto a bunch of MDADM or LVM disks and destroy them, so apparently they're not very good at preventing damaging actions.
>> That's not what I said. I said force-mounting should not be allowed to damage data integrity.
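(The dd point is easy to demonstrate, for the record; the disk name is a placeholder and this is obviously destructive.)

    # DESTRUCTIVE: wipes the first 100 MiB of the disk, taking the
    # partition table and any md/LVM superblocks with it
    dd if=/dev/zero of=/dev/sdX bs=1M count=100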
This is why ECC memory is pushed so heavily. But other filesystems can also be damaged by their own repair tools if bad memory is present, so this is a specious argument.
This is just an insane statement. The power of ZFS is that you can easily have a terabyte of RAM and terabytes of L2ARC fronting a petabyte of hard disk storage, and it will be insanely fast; your RAID controller with hardware cache and backup battery can't even begin to do that.
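(For context, the L2ARC mentioned above is just a cache device added to a pool; pool and device names below are made up.)

    # Add an SSD or NVMe device as a second-level read cache (L2ARC)
    zpool add tank cache /dev/nvme0n1
    # A mirrored pair of SSDs as a dedicated log device helps sync-write latency
    zpool add tank log mirror /dev/ssd0 /dev/ssd1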
Well, that's just an opinion, and one not really backed up by any facts.
That's not really an issue, though. ZFS is just designed differently.
But virtually EVERYONE uses compression, so you don't really have your facts straight.
>> No one I work with bothers with compression, but I am not complaining if ZFS can do it. I am just saying it's nothing special and might not be a feature that's worth all the other downsides.
But no downside either.
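(Enabling it really is a one-liner, and you can check what it buys you on real data; the dataset name is made up, and lz4 assumes a reasonably recent ZFS.)

    # Turn on lightweight compression for a dataset
    zfs set compression=lz4 tank/data
    # See the compression ratio actually achieved
    zfs get compressratio tank/data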
iSCSI works fine and is absolutely recommended IF you are willing to play by ZFS's rules. I've said many times that ZFS, being a CoW filesystem, needs lots of resources to make iSCSI work well, but properly resourced, it will make HDD storage perform almost as well as SSD.
>> Maybe, but that doesn't come across in the PowerPoint. It reads like it's a mess even with resources and lots of trial and error, while under Linux it works out of the box and performs pretty well. Again, practicality is important.
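(Concretely, "playing by ZFS's rules" for iSCSI mostly means exporting a sensibly configured zvol; the names and sizes below are made up, and the right volblocksize depends on the workload.)

    # Create a block device (zvol) to export over iSCSI
    zfs create -V 500G -o volblocksize=8K tank/esxi-lun
    # Keep sync writes honest instead of cheating with sync=disabled
    zfs set sync=standard tank/esxi-lun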
And lots of people use it as an ESXi datastore in the real world, so I don't know if this is just you making uninformed statements or what. Lots of enterprises use ZFS for their most challenging storage needs, where performance on huge amounts of storage is a key consideration, because when you give ZFS a bunch of resources, it will give you storage that is much faster than standard Linux or BSD filesystems.
>> Well, the PowerPoint does not encourage its use as an ESXi datastore, so I am not sure it's wise to risk it when some people say yes and some say no. At least with Linux filesystems it works and performs 100%.