Alibuba
Cadet
- Joined
- Jan 30, 2021
- Messages
- 6
Hi everyone,
I'm planning on retiring my aging CentOS system, and migrating existing data and hard disk drives to a newly built (albeit from old spares) ESXi host.
In addition to the Linux server I'm running a NAS box, used mostly for backups, and a desktop with a Windows Storage Space. My plan is to get rid of these individual storage pools, and use all the drives for FreeNAS.
FreeNAS will be providing NFS shares for DVR storage and a few ESXi virtual machines, SMB shares for Windows and macOS hosts, and preferably Time Machine storage for the Macs as well. I/O load will be minuscule.
The current configuration of the drives is:
CentOS Linux-server: 5*3TB RAID5
NAS box: 4*4TB RAID5
Windows 10 desktop: 4*3TB Mirrored storage space pool
Spare drives: 1*3TB
All the drives are WD Reds with years of power-on time behind them (between 40,000 and 68,000 hours for the 24/7 drives, significantly less for the Windows desktop's drives).
I will be able to shuffle data around so that I can free up disks from two of the three systems.
The ESXi server is running dual X5550 CPUs with 48GB of ECC RAM and two LSI2008 HBAs. ESXi boots from an old, but unused, 120GB SSD drive.
I will be able to purchase additional drives (e.g. SSD for FreeNAS and ESXi, and HDD for FreeNAS) as needed.
My questions thus far are:
1. Even though the SMART statuses for all of the drives show a clean bill of health, would it be risky running such old drives on ZFS specifically?
I have replaced a single drive in the RAID5 MD array during the past 7 years, so the disks have served me well. I will be migrating to newer and larger drives eventually, but I feel like there's a couple of years left in the current drives.
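Before committing drives this old to a new pool, it's probably worth running long SMART self-tests on each of them rather than relying on the passive attribute readout alone. A rough sketch with smartctl (device names like /dev/ada0 are placeholders; adjust for your system, and on FreeNAS the scheduled S.M.A.R.T. tests in the UI do the same job):

```shell
# Placeholder device -- repeat for each disk you plan to reuse.
# Kick off a long (full-surface) self-test; this runs in the background
# on the drive itself and can take several hours per disk:
smartctl -t long /dev/ada0

# Once the test has finished, review the result and the key attributes
# (reallocated sectors, pending sectors, UDMA CRC errors):
smartctl -a /dev/ada0
```

A drive that passes a long test and shows zero reallocated/pending sectors is at least not obviously failing, though with 40k+ power-on hours it's still wise to keep the eventual replacement plan in mind.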
2. What would be the optimal layout for the pool and vdevs?
I've been reading up on ZFS documentation, and playing around with dozens of virtual disks on my FreeNAS VM, but I'm still having a tough time wrapping my head around all this. My initial (and, as I learned, misguided) plan was to create a pool with five disks in RAIDZ2, and then expand that vdev with more disks.
Since this cannot be done, I've been testing the scenario where I would first create a pool with two 5*3TB RAIDZ2 vdevs, then get one more 4TB drive and add a third 5*4TB RAIDZ2 vdev to the pool.
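For rough sizing, each RAIDZ2 vdev gives up two disks' worth of capacity to parity, so the usable space of this layout (ignoring ZFS metadata/overhead and TB-vs-TiB differences) works out to:

```shell
# RAIDZ2 usable capacity is roughly (disks - 2) * disk_size per vdev:
# two 5*3TB vdevs plus one 5*4TB vdev, in TB
echo $(( (5-2)*3 + (5-2)*3 + (5-2)*4 ))   # prints 30
```

So on the order of 30 TB raw usable from the 15 drives, before overhead.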
Is this at all a sensible approach, or should I think things over?
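In command form, the two-step plan above would look something like this sketch (pool and device names such as tank and da0..da14 are placeholders; on FreeNAS you'd normally do all of this through the web UI rather than the CLI):

```shell
# Step 1: create the pool with two 5-disk RAIDZ2 vdevs from the 3TB drives.
zpool create tank \
    raidz2 da0 da1 da2 da3 da4 \
    raidz2 da5 da6 da7 da8 da9

# Step 2 (later, after buying a fifth 4TB drive): grow the pool by
# adding a third RAIDZ2 vdev. Note this is "zpool add" (a new vdev),
# not an expansion of an existing vdev -- which is why the original
# plan of growing a single RAIDZ2 vdev doesn't work.
zpool add tank raidz2 da10 da11 da12 da13 da14
```

One thing to keep in mind: ZFS stripes new writes across all vdevs, and the pool is only as safe as its weakest vdev, so every vdev added this way is permanent for the life of the pool.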
Thanks a bunch in advance for any and all suggestions!