abufrejoval
Dabbler
- Joined
- May 9, 2023
- Messages
- 20
Conceptual issues:
I operate various oVirt hyperconverged (HCI) clusters today, both in my home lab and in a corporate research lab. They are built from cheap/low-power Atoms and NUCs at home and mostly from left-over hardware in the corporate lab, using NBASE-T Ethernet and a mix of SSD and HDD local storage turned into fault-tolerant distributed storage via Gluster, some replica volumes, some dispersed/erasure-coded volumes (tricky with oVirt!): there are no SANs or NFS filers, just servers, which is why HCI is so attractive.
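For context, the Gluster side of such a setup boils down to a handful of CLI calls. A rough sketch (hostnames and brick paths are made up, run from one of the nodes):

```shell
# Make the other two nodes cluster peers (node names are placeholders)
gluster peer probe node2
gluster peer probe node3

# A 3-way replica volume: every brick holds a full copy of the data
gluster volume create vmstore replica 3 \
    node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
gluster volume start vmstore

# A dispersed (erasure-coded) volume: 2 data + 1 redundancy brick,
# survives one brick failure at ~1.5x storage overhead instead of 3x
gluster volume create archive disperse 3 redundancy 1 \
    node1:/bricks/archive node2:/bricks/archive node3:/bricks/archive
gluster volume start archive
```

The dispersed layout is the space-efficient one, but as noted it is the trickier of the two to get oVirt to accept.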
oVirt is the upstream variant of Red Hat Virtualization and basically a vSphere/Nutanix look-alike. It originally did mostly VM orchestration using KVM (from Qumranet) on SAN or NFS storage and was then modded into a kind of HCI (hyperconverged infrastructure) using things like GlusterFS once Nutanix made that popular. It also included features like VDO, a de-duplication and compression technology from Permabit, which ZFS people typically regard as just another option, not as a separate product or layer.
oVirt/RHV was built from quite a few Red Hat acquisitions, was refactored in Ansible some years ago, and never achieved full integration or maturity, one of many reasons it's now being discontinued as a commercial offering while the community project is still hoping to find new leaders.
That's extremely inconvenient: now that I've managed to get it stable enough to use, I was hoping to run oVirt for as long as I live, but Red Hat has EoL'ed most of the components and derailed CentOS.
I believe Proxmox does a similar job on VM orchestration, but does not include a native HCI-type storage layer. There used to be Linstor for that, and although the two companies are less than 2 km from each other in Vienna, they are evidently no longer talking...
And there is also XCP-ng, a Xen-based orchestrator, which is looking for a better way of doing storage now that Gluster seems in dire straits, with all commercial downstream products from Red Hat cancelled. I've evaluated it for quite a while; it's much faster and more stable than oVirt in many ways, but suffers from a lot of technical debt carried over from Citrix and a Linux 4.9 kernel.
In many ways oVirt/XCP-ng and Proxmox long ago achieved the functionality TrueNAS seems to be aiming for, and I wonder how much these projects are aware of each other, and where exactly TrueNAS is trying to take SCALE clusters in terms of functionality.
Are you aiming for HCI, with VM/container orchestration or will you stop clusters at the storage level?
And incidentally, what is the nature of the TrueNAS storage clustering exactly, which protocols are supported and how?
My understanding is that ZFS has no native clustering mechanism, nor is it remote storage, which is why things like Lustre exist, and single-node remote access would be done via NFS or iSCSI, right? And there is no cluster support built into NFS or iSCSI, so high availability is either part of the appliance or somehow implemented on the client side, correct?
Now there does seem to be some cluster support built into modern CIFS/SMB variants, and support for that is the major aim of TrueNAS SCALE clusters. For Linux clients, TrueNAS clusters simply rely on GlusterFS, with all the good and the bad that implies: do I read all that correctly, or am I missing something important?
Practical issues:
I've bootstrapped three VMs with the latest TrueNAS SCALE to try to create a cluster, and I've launched a Docker container with TrueCommand to manage it.
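Launching TrueCommand in a container is roughly a one-liner; this is the shape of what I ran (the host paths and ports are my own choices, assuming the ixsystems/truecommand image from Docker Hub):

```shell
# Persist TrueCommand state under /opt/truecommand on the host
# (path and published ports are arbitrary local choices)
docker run --detach \
    --name truecommand \
    -v /opt/truecommand:/data \
    -p 9004:80 -p 9005:443 \
    ixsystems/truecommand:latest
# The web UI then answers on http://<docker-host>:9004
```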
I managed to overcome the limitation that the current TrueCommand container only works with TrueNAS nodes that use "root" as the admin account and am now stuck in the cluster creation wizard, where it can't find cluster interfaces on the three VMs.
Evidently it wants distinct East-West interfaces there, but even after I added those to the VMs, they wouldn't become selectable, and I've not found any documentation, hints or logfiles that would let me understand what the wizard is looking for.