Just curious... why are you using this old version of FreeNAS?

Updated approach to my VMware NFS use of FreeNAS 9.2.1.7...
I finally understand more about ZIL (SLOG) etc., so I bought:
1) Dell XPS 8700 - Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz, with 32GB RAM.
2) 2 x 500 GB Samsung EVO SSDs - Samsung 850 EVO 500GB 2.5-Inch SATA III Internal SSD (MZ-75E500B/AM)
3) 1 x 120 GB Kingston for ZIL - Kingston Digital 120GB SSDNow V300 SATA 3 2.5 Solid State Drive (SV300S37A/120G)
Steps:
1) Used the web GUI to configure the 2 x 500 GB SSDs as a mirrored zpool with lz4 (initial default compression ratio 6.58x).
2) Used the web GUI to configure the 1 x 120 GB Kingston as a ZIL (pool), then used the GUI to detach it (leaving the formatting in place). *I understand the ZIL does not need anywhere near 120 GB, but it was only $50 and I have no other use for that SSD.
3) I used command line to attach the ZIL to the zpool
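For anyone wanting to replicate step 3, it boils down to a single zpool command. This is roughly what I ran; the pool and device names match my setup (see the status output), so adjust them for yours:

```shell
# Add the Kingston SSD's partition as a dedicated log (SLOG) device.
# The partition already existed from the earlier GUI formatting step.
zpool add aeraidz log ada1p2

# Verify the log device now appears under "logs"
zpool status aeraidz
```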
# zpool status
  pool: aeraidz
 state: ONLINE
  scan: none requested
config:

        NAME                                            STATE   READ WRITE CKSUM
        aeraidz                                         ONLINE     0     0     0
          mirror-0                                      ONLINE     0     0     0
            gptid/cd5048df-b7ec-11e6-8d53-6805ca4185e3  ONLINE     0     0     0
            gptid/cd6475c8-b7ec-11e6-8d53-6805ca4185e3  ONLINE     0     0     0
        logs
          ada1p2                                        ONLINE     0     0     0
4) I then set up an NFS share and mounted it on my ESXi 5.5.0 hosts - with sync=default
[root@aenas3 /]# zfs get sync
NAME PROPERTY VALUE SOURCE
aeraidz sync standard default
aeraidz/.system sync standard default
aeraidz/.system/cores sync standard default
aeraidz/.system/rrd sync standard default
aeraidz/.system/samba4 sync standard default
aeraidz/.system/syslog sync standard default
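For reference, the ESXi-side mount in step 4 can be done from the host shell with esxcli. The hostname below is from my setup and the export path is a placeholder; substitute your own:

```shell
# Mount the FreeNAS NFS export as a datastore on the ESXi 5.5 host.
# "aenas3" is my FreeNAS box; the share path is an example.
esxcli storage nfs add --host=aenas3 --share=/mnt/aeraidz/vmstore --volume-name=aenas3-nfs

# List NFS datastores to verify the mount
esxcli storage nfs list
```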
5) Then I did a migration of a 160 GB (provisioned; 72 GB used) VM and WOO HOO... I got 90 MB/s writes.
5a) FreeNAS network throughput... almost fully saturated the 1 Gbit network
[attachment 14786]
5b) ada1 = ZIL. ada2/ada3 = Mirror zpool
[attachment 14787]
5c) Here's the VMware write performance in KBps of the migration
[attachment 14785]
6) I did additional migrations and experimented with sync=disabled and sync=always, but neither materially changed write performance on subsequent VMware migrations.
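If you want to repeat the sync experiment, the property is toggled per dataset. Here it is against my pool's root dataset; use your own dataset name:

```shell
# Try each sync mode, re-running a migration after each change.
zfs set sync=disabled aeraidz   # fastest, but unsafe on power loss
zfs set sync=always aeraidz     # force every write through the ZIL/SLOG
zfs set sync=standard aeraidz   # back to the default (honor client sync requests)

# Verify the current setting
zfs get sync aeraidz
```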
Another data point - I have another FreeNAS box
- Intel(R) Core(TM)2 Quad CPU Q9400 @ 2.66GHz with 8GB RAM.
- Single (striped zpool) Samsung 850 EVO 500GB SSD.
- NFS to VMware
With sync=disabled (very risky) I get 90 MB/s, BUT only 7 MB/s with sync=always. 7 MB/s is just too slow for practical VMware use.
CONCLUSION:
- It looks like the ZIL (SLOG) really works for the VMware/NFS case. Amazing that after a couple of years of fooling with this, a simple SLOG addition made the difference between unusable and usable performance for VMware on NFS.
- I think I now have a fully 'sane' FreeNAS setup: sync=standard with reasonable VMware NFS performance. And with compression, that 500 GB easily stretches to 1 TB+ of space for my VMs (a sustained 2x ratio is all it takes).
I'd be interested in comments - particularly if I have not actually achieved a safe solution (ZFS metadata preserved, sync=standard) with an adequate level of VMware NFS performance.
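A quick way to check how far compression is actually stretching the pool (these are standard ZFS properties; "aeraidz" is my pool name):

```shell
# compressratio = achieved ratio; logicalused vs used shows the real savings
zfs get compressratio,used,logicalused aeraidz
```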
Regarding your system... The Dell XPS 8700 is a desktop PC that doesn't support ECC RAM, which is important for data integrity.
The Kingston SSD isn't a very good choice for a SLOG device. A good SLOG device needs built-in capacitor power-loss protection, low latency, fast writes, and very high write endurance. A good entry-level SSD SLOG device is the Intel DC S3700/S3710. A better choice is the Intel DC P3700, an NVMe device... but these are pricey. You can get an Intel 750 (also an NVMe device) for less money, but it won't perform as well as the P3700.
To be honest, you may not be gaining much benefit from a ZIL SLOG device. Your pool is built using SSDs, which are pretty darned fast to begin with...
If you're just running a lab environment, you'll get the best performance by setting sync=disabled on your NFS-based VM datastore. I did this for over a year before adding an Intel DC S3700 SLOG device.
Here are some related threads:
https://forums.freenas.org/index.php?threads/some-insights-into-slog-zil-with-zfs-on-freenas.13633/
https://forums.freenas.org/index.ph...csi-vs-nfs-performance-testing-results.46553/
Good luck!