viniciusferrao
Contributor
- Joined
- Mar 30, 2013
- Messages
- 192
Hi guys,
I'm opening a new thread since the old one was blown :)
I have a good ZFS machine: 2x Xeon E5-2620 and 128GB of ECC RAM.
My disks are 24x Seagate SATA 3TB 7200RPM, Model: ATA ST3000DM001-1CH1 CC24; they are in a stripe-of-mirrors configuration, so there are 12 vdevs of 2 disks each in one zpool.
My SSDs are 2x Kingston V300 120GB SATA, Model: SV300S37A120G, configured as L2ARC.
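For reference, the layout is roughly equivalent to something like this (the device names below are just placeholders, not the real ones):
Code:
# Placeholder device names: da0..da23 for the 24 HDDs, ada0/ada1 for the SSDs.
zpool create pool0 \
    mirror da0  da1   mirror da2  da3   mirror da4  da5  \
    mirror da6  da7   mirror da8  da9   mirror da10 da11 \
    mirror da12 da13  mirror da14 da15  mirror da16 da17 \
    mirror da18 da19  mirror da20 da21  mirror da22 da23

# Both SSDs attached as L2ARC (cache) devices:
zpool add pool0 cache ada0 ada1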
I really believe they are slowing down my pool. The question is: how do I prove it?
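So far the only idea I have is to watch the cache devices directly while the pool is under load; something like this should show whether the L2ARC is actually getting hits (FreeBSD sysctl names, assuming FreeNAS exposes them the same way):
Code:
# Per-vdev throughput every 10 seconds, including the two cache devices:
zpool iostat -v pool0 10

# L2ARC hit/miss counters:
sysctl kstat.zfs.misc.arcstats.l2_hits kstat.zfs.misc.arcstats.l2_misses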
At the moment I'm running some benchmarks. I've removed the SSDs from the disk zpool and created a new zpool striping the two of them, with 240GB of usable space.
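In command form, that is roughly the following (placeholder device names again, and "ssdpool" is just an illustrative name for the SSD pool):
Code:
# Detach the two SSDs, which were cache (L2ARC) devices, from the main pool:
zpool remove pool0 ada0 ada1

# Re-create them as a plain two-disk stripe, ~240GB usable:
zpool create ssdpool ada0 ada1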
All iozone tests are run with this command:
Code:
iozone -+w 0 -+y 0 -+C 0 -a -r 4096 -s 240G
I chose a 240GB file size to overflow the ARC in RAM, and a 4MB record size. Why 4MB? I don't know.
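If anyone wants to sanity-check the ARC part, the current ARC size and its limit can be read with these FreeBSD sysctls (240GB is well above the 128GB of RAM in any case):
Code:
sysctl kstat.zfs.misc.arcstats.size   # current ARC size in bytes
sysctl vfs.zfs.arc_max                # configured ARC maximum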
Here are some results, first on the SSD pool and then on the disk pool.
Code:
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.420 $
	        Compiled for 64 bit mode.
	        Build: freebsd

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Wed Apr 9 00:18:30 2014

	Dedup activated 0 percent.
	Dedupe within & across 0 percent.
	Dedupe within 0 percent.
	Auto Mode
	Record Size 4096 KB
	File size set to 251658240 KB
	Command line used: iozone -+w 0 -+y 0 -+C 0 -a -r 4096 -s 240G
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.

	                                                               random   random     bkwd   record   stride
	        KB   reclen    write  rewrite     read   reread       read    write     read  rewrite     read   fwrite frewrite    fread  freread
	 251658240     4096  2732317  1388588  3317126  3487875    2139369  2165839  2184993  6870975  2309243  2440627  1531569  2423249  2587110
Disk pool:
Code:
[root@storage] /mnt/pool0# iozone -+w 0 -+y 0 -+C 0 -a -r 4096 -s 240G
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.420 $
	        Compiled for 64 bit mode.
	        Build: freebsd

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Wed Apr 9 00:43:59 2014

	Dedup activated 0 percent.
	Dedupe within & across 0 percent.
	Dedupe within 0 percent.
	Auto Mode
	Record Size 4096 KB
	File size set to 251658240 KB
	Command line used: iozone -+w 0 -+y 0 -+C 0 -a -r 4096 -s 240G
	Output is in Kbytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 Kbytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.

	                                                               random   random     bkwd   record   stride
	        KB   reclen    write  rewrite     read   reread       read    write     read  rewrite     read   fwrite frewrite    fread  freread
	 251658240     4096  2568673  1253497  2887512  3032571     331192  1875399   727754  6479734  1054113  2360520  1215763  1912467  2093973
There are some things I really don't get. In the sequential tests the disk pool performed almost the same as the SSD pool, which is bad in my opinion; I was expecting much more from those SSDs, even sh***y ones like these. At random read the SSDs do show a huge improvement, roughly 6.5 times faster (2,139,369 vs. 331,192 KB/s), but that still isn't enough. I will benchmark a single SSD with iozone to see if I get better results.
There's another problem: some serious compression is happening during the iozone tests, even with the options "-+w 0 -+y 0 -+C 0", which should generate incompressible data.
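One way to rule compression in or out would be to check the dataset's compression setting and the reported compressratio, and to turn compression off just for the benchmark runs (shown here for pool0; same idea for the SSD pool):
Code:
zfs get compression,compressratio pool0   # is compression enabled, and how much is it actually saving?
zfs set compression=off pool0             # disable it for the benchmark; restore the previous value afterwards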