8. Storage

The Storage section of the graphical interface allows configuration of these options:

8.1. Volumes

The Volumes section of the FreeNAS® graphical interface is used to format volumes, attach a disk to copy data onto an existing volume, or import a ZFS volume. It is also used to create ZFS datasets and zvols and to manage their permissions.

Note

In ZFS terminology, groups of storage devices managed by ZFS are referred to as a pool. The FreeNAS® graphical interface uses the term volume to refer to a ZFS pool.

Proper storage design is important for any NAS. Please read through this entire chapter before configuring storage disks. Features are described to make it clear which are beneficial for particular uses, along with the caveats or hardware restrictions that limit their usefulness.

8.1.1. Volume Manager

Before creating a volume, determine the level of required redundancy, how many disks will be added, and if any data exists on those disks. Creating a volume overwrites disk data, so save any required data to different media before adding disks to a pool. Refer to the ZFS Primer for information on ZFS redundancy with multiple disks before using Volume Manager. It is important to realize that different layouts of virtual devices (vdevs) affect which operations can be performed on that volume later. For example, drives can be added to a mirror to increase redundancy, but that is not possible with RAIDZ arrays.

To create a volume, click Storage ‣ Volumes ‣ Volume Manager. This opens a screen like the example shown in Figure 8.1.1.

_images/storage-volman.png

Fig. 8.1.1 Creating a ZFS Pool Using Volume Manager

Table 8.1.1 summarizes the configuration options of this screen.

Table 8.1.1 ZFS Volume Creation Options
Setting Value Description
Volume name string ZFS volumes must conform to these naming conventions. Choosing a memorable name that sticks out in the logs and avoiding generic names, like data or freenas, is recommended.
Volume to extend drop-down menu Extend an existing ZFS pool. See Extending a ZFS Volume for more details.
Encryption checkbox See the warnings in Encryption before enabling encryption.
Available disks display Display the number and size of available disks. Hover over show to list the available device names and click the + to add all of the disks to the pool.
Volume layout drag and drop Click and drag the icon to select the desired number of disks for a vdev. When at least one disk is selected, the layouts supported by the selected number of disks are added to the drop-down menu.
Add Extra Device button Configure multiple vdevs or add log or cache devices during pool creation.
Manual setup button Create a pool manually, which is not recommended. See Manual Setup for details.

Click the Volume name * field and enter a name for the pool. Ensure that the chosen name conforms to these naming conventions.

If the underlying disks need to be encrypted as a protection against physical theft, enable the Encryption option.

Warning

Refer to the warnings in Encryption before enabling encryption!

Drag the slider to select the desired number of disks. Volume Manager displays the resulting storage capacity, taking reserved swap space into account. To change the number of disks or the layout, drag the slider to the desired position. The Volume layout drop-down menu can also be clicked if a different level of redundancy is required.

Note

For performance and capacity reasons, this screen does not allow creating a volume from disks of differing sizes. While it is not recommended, it is possible to create a volume of differently-sized disks with the Manual setup button. Follow the instructions in Manual Setup.

Volume Manager only allows choosing a configuration if enough disks have been selected to create that configuration. These layouts are supported:

  • Stripe: requires at least one disk
  • Mirror: requires at least two disks
  • RAIDZ1: requires at least three disks
  • RAIDZ2: requires at least four disks
  • RAIDZ3: requires at least five disks
  • log device: requires at least one dedicated device; a fast, low-latency, power-protected SSD is recommended
  • cache device: requires at least one dedicated device; an SSD is recommended

When more than five disks are used, consideration must be given to the optimal layout for the best performance and scalability. An overview of the recommended disk group sizes as well as more information about log and cache devices can be found in the ZFS Primer.
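Volume Manager builds a standard ZFS pool from the selected disks. For reference only, a minimal command-line sketch of a comparable six-disk RAIDZ2 pool is shown below; the pool name tank and the device names are placeholders, and on FreeNAS® the Volume Manager, not Shell, is the supported way to create pools because it also handles swap space and disk labeling.

  # Hypothetical example: create a six-disk RAIDZ2 pool named "tank".
  # Device names ada1-ada6 are placeholders for the actual disks.
  zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5 ada6
  # Verify the resulting layout and pool health.
  zpool status tank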

The Add Volume button warns that existing data will be cleared. In other words, creating a new volume reformats the selected disks. To preserve existing data, click the Cancel button and refer to Import Disk and Import Volume to see if the existing format is supported. If so, perform that action instead. If the current storage format is not supported, it is necessary to back up the data to external media, format the disks, then restore the data to the new volume.

Depending on the size and number of disks, the type of controller, and whether encryption is selected, creating the volume may take some time. After the volume is created, the screen refreshes and the new volume is listed in the tree under Storage ‣ Volumes. Click the + next to the volume name to access Change Permissions, Create Dataset, and Create zvol options for that volume.

8.1.1.1. Encryption

Note

The encryption facility used by FreeNAS® is designed to protect against physical theft of the disks. It is not designed to protect against unauthorized software access. Ensure that only authorized users have access to the administrative GUI and that proper permissions are set on shares if sensitive data is stored on the system.

FreeNAS® supports GELI full disk encryption for ZFS volumes. It is important to understand the details when considering whether encryption is right for your FreeNAS® system:

  • FreeNAS® encryption is different from the encryption used in Oracle’s proprietary, non-open source version of ZFS.

  • In FreeNAS®, entire disks are encrypted, not individual filesystems. Encrypted devices are created from the underlying drives, then the volume (pool) is created on top of the encrypted devices. Data is encrypted as it is written and decrypted as it is read.

  • This type of encryption is primarily useful for users storing sensitive data but wanting the ability to remove disks from the pool without having to first wipe the disk contents.

  • The FreeNAS® encryption design is only suitable for safe disposal of disks independent of the encryption key. As long as the key and the disks are intact, the system is vulnerable to being decrypted. The key should be protected by a strong passphrase and any backups of the key should be securely stored.

  • If the encryption key is lost, the data on the disks is inaccessible. Always back up the key!

  • Encryption keys are per ZFS volume (pool). Each pool has a separate encryption key. Technical details about how encryption keys are used, stored, and managed within FreeNAS® are described in this forum post.

  • Data in memory, including ARC, is not encrypted. ZFS data on disk, including ZIL and SLOG, is encrypted if the underlying disks are encrypted. Swap data on disk is always encrypted.

    Warning

    Data stored in Cache (L2ARC) drives is not encrypted. Do not use Cache (L2ARC) with encrypted volumes.

  • At present, there is no one-step way to encrypt an existing, unencrypted volume. Instead, the data must be backed up, the existing pool destroyed, a new encrypted volume created, and the backup restored to the new volume.

  • Hybrid pools are not supported. Added vdevs must match the existing encryption scheme. Volume Manager automatically encrypts a new vdev being added to an existing encrypted pool.

To create an encrypted volume, enable the Encryption option shown in Figure 8.1.1. A pop-up message shows a reminder that it is extremely important to make a backup of the key. Without the key, the data on the disks is inaccessible. See Managing Encrypted Volumes for instructions.

8.1.1.2. Encryption Performance

Encryption performance depends upon the number of disks encrypted. The more drives in an encrypted volume, the more encryption and decryption overhead, and the greater the impact on performance. Encrypted volumes composed of more than eight drives can suffer severe performance penalties. If encryption is desired, please benchmark such volumes before using them in production.

Note

Processors with support for the AES-NI instruction set are strongly recommended. These processors can handle encryption of a small number of disks with negligible performance impact. They also retain performance better as the number of disks increases. Older processors without the AES-NI instructions see significant performance impact with even a single encrypted disk. This forum post compares the performance of various processors.
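To check whether the FreeBSD kernel underlying FreeNAS® detected AES-NI on the installed processor, the boot messages can be searched from Shell. This is only an informal sketch; the exact wording of the CPU feature line varies by processor.

  # Look for the AESNI CPU feature flag in the boot messages.
  grep AESNI /var/run/dmesg.boot
  # Check whether the aesni(4) module is loaded (it may instead be
  # compiled into the kernel).
  kldstat | grep aesni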

8.1.1.3. Manual Setup

The Manual Setup button shown in Figure 8.1.1 can be used to create a ZFS volume manually. While this is not recommended, it can, for example, be used to create a non-optimal volume containing disks of different sizes.

Note

The usable space of each disk in a volume is limited to the size of the smallest disk in the volume. Because of this, creating volumes with disks of the same size through the Volume Manager is recommended.

Figure 8.1.2 shows the Manual Setup screen. Table 8.1.2 shows the available options.

_images/manual.png

Fig. 8.1.2 Manually Creating a ZFS Volume

Note

Because of the disadvantages of creating volumes with disks of different sizes, the displayed list of disks is sorted by size.

Table 8.1.2 Manual Setup Options
Setting Value Description
Volume name string ZFS volumes must conform to these naming conventions. Choosing a unique, memorable name is recommended.
Volume to extend drop-down menu Extend an existing ZFS pool. See Extending a ZFS Volume for more details.
Encryption checkbox See the warnings in Encryption before using encryption.
Member disks list Highlight the desired number of disks from the list of available disks. Hold Ctrl and click a highlighted item to de-select it. Selecting a member disk removes it from the ZFS Extra list.
Deduplication drop-down menu Choices are Off, Verify, and On. Carefully consider the section on Deduplication before changing this setting.
ZFS Extra bullet selection Specify disk usage: storage (None), a log device, a cache device, or a spare. Choosing a value other than None removes the disk from the Member disks list.

8.1.1.4. Extending a ZFS Volume

The Volume to extend drop-down menu in Storage ‣ Volumes ‣ Volume Manager, shown in Figure 8.1.1, is used to add disks to an existing ZFS volume to increase capacity. This menu is empty if there are no ZFS volumes yet.

If more than one disk is added, the arrangement of the new disks into stripes, mirrors, or RAIDZ vdevs can be specified. Mirrors and RAIDZ arrays provide redundancy for data protection if an individual drive fails.

Note

If the existing volume is encrypted, a warning message shows a reminder that extending a volume resets the passphrase and recovery key. After extending the volume, immediately recreate both using the instructions in Managing Encrypted Volumes.

After an existing volume has been selected from the drop-down menu, drag and drop the desired disks and select the desired volume layout. For example, disks can be added to increase the capacity of the volume.

When adding disks to increase the capacity of a volume, ZFS supports the addition of virtual devices, or vdevs, to an existing ZFS pool. A vdev can be a single disk, a stripe, a mirror, a RAIDZ1, RAIDZ2, or a RAIDZ3. After a vdev is created, more drives cannot be added to that vdev. However, a new vdev can be striped with another of the same type of existing vdev to increase the overall size of the volume. Extending a volume often involves striping similar vdevs. Here are some examples:

  • to extend a ZFS stripe, add one or more disks. Since there is no redundancy, disks do not have to be added in the same quantity as the existing stripe.
  • to extend a ZFS mirror, add the same number of drives. The resulting striped mirror is a RAID 10. For example, if ten new drives are available, a mirror of two drives could be created initially, then extended by creating another mirror of two drives, and repeating three more times until all ten drives have been added.
  • to extend a three drive RAIDZ1, add three additional drives. The result is a RAIDZ+0, similar to RAID 50 on a hardware controller.
  • to extend a RAIDZ2 requires a minimum of four additional drives. The result is a RAIDZ2+0, similar to RAID 60 on a hardware controller.

If an attempt is made to add a non-matching number of disks to the existing vdev, an error message appears, indicating the number of disks that are required. Select the correct number of disks to continue.
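For reference, extending a volume stripes a new vdev into the existing pool. A hedged command-line sketch of the same operation is shown below; the pool and device names are placeholders, and Volume Manager remains the supported method on FreeNAS®.

  # Hypothetical example: stripe a second two-disk mirror into an
  # existing pool named "tank", producing a RAID 10 layout.
  zpool add tank mirror ada4 ada5
  # Confirm that the new vdev appears alongside the original one.
  zpool status tank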

8.1.1.4.1. Adding L2ARC or SLOG Devices

Storage ‣ Volumes ‣ Volume Manager (see Figure 8.1.1) is also used to add L2ARC or SLOG SSDs to improve volume performance for specific use cases. Refer to the ZFS Primer to determine if the system will benefit or suffer from the addition of the device.

Once the SSD has been physically installed, click the Volume Manager button and choose the volume from the Volume to extend drop-down menu. Click the + next to the SSD in the Available disks list. In the Volume layout drop-down menu, select Cache (L2ARC) to add a cache device, or Log (ZIL) to add a log device. Finally, click Extend Volume to add the SSD.
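Under the hood, this adds a dedicated log or cache vdev to the pool. A rough command-line equivalent, with placeholder names and shown only for illustration, looks like this:

  # Hypothetical example: ada6 and ada7 are SSDs set aside for SLOG and L2ARC.
  zpool add tank log ada6      # add a dedicated log (SLOG) device
  zpool add tank cache ada7    # add a dedicated cache (L2ARC) device
  zpool status tank            # the devices appear under "logs" and "cache"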

8.1.2. Change Permissions

Setting permissions is an important aspect of managing data access. The graphical administrative interface is meant to set the initial permissions for a volume or dataset in order to make it available as a share. Once a share is available, the client operating system should be used to fine-tune the permissions of the files and directories that are created by the client.

Sharing contains configuration examples for several types of permission scenarios. This section provides an overview of the options available for configuring the initial set of permissions.

Note

For users and groups to be available, they must either be first created using the instructions in Account or imported from a directory service using the instructions in Directory Services. If more than 50 users or groups are available, the drop-down menus described in this section will automatically truncate their display to 50 for performance reasons. In this case, start to type in the desired user or group name so that the display narrows its search to matching results.

After a volume or dataset is created, it is listed by its mount point name in Storage ‣ Volumes. Clicking the Change Permissions icon for a specific volume or dataset displays the screen shown in Figure 8.1.3. Table 8.1.3 summarizes the options in this screen.

_images/perms1.png

Fig. 8.1.3 Changing Permissions on a Volume or Dataset

Table 8.1.3 Options When Changing Permissions
Setting Value Description
Apply Owner (user) checkbox Deselect to prevent the permission change from being applied to Owner (user); see the Note below.
Owner (user) drop-down menu Select the user to control the volume or dataset. Users manually created or imported from a directory service will appear in the drop-down menu.
Apply Owner (group) checkbox Deselect to prevent the permission change from being applied to Owner (group); see the Note below.
Owner (group) drop-down menu Select the group to control the volume or dataset. Groups manually created or imported from a directory service will appear in the drop-down menu.
Apply Mode checkbox Deselect to prevent the permission change from being applied to Mode; see the Note below.
Mode checkboxes Only applies to the Unix or Mac “Permission Type”, so it is grayed out if Windows is selected.
Permission Type bullet selection Select the type which matches the type of client accessing the volume or dataset. Choices are Unix, Mac or Windows.
Set permission recursively checkbox If enabled, permissions will also apply to subdirectories of the volume or dataset. If data already exists on the volume or dataset, change the permissions on the client side to prevent a performance lag.

Note

The Apply Owner (user), Apply Owner (group), and Apply Mode options allow fine-tuning of the change permissions behavior. By default, all options are enabled and FreeNAS® resets the owner, group, and mode when the Change button is clicked. These options allow choosing which settings to change. For example, to change just the Owner (group) setting, deselect the Apply Owner (user) and Apply Mode options.

The Windows Permission Type is used for Windows (SMB) Shares or when the FreeNAS® system is a member of an Active Directory domain. This type adds ACLs to traditional Unix permissions. When the Windows Permission Type is set, ACLs are set to the Windows defaults for new files and directories. A Windows client can be used to further fine-tune permissions as needed. After a volume or dataset has been set to Windows, it cannot be changed to Unix permissions because that would clobber the extended permissions provided by Windows ACLs.

The Unix Permission Type is usually used with Unix (NFS) Shares. Unix permissions are compatible with most network clients and generally work well with a mix of operating systems or clients. However, Unix permissions do not support Windows ACLs and should not be used with Windows (SMB) Shares.

The Mac Permission Type can be used with Apple (AFP) Shares.

8.1.3. Create Dataset

An existing ZFS volume can be divided into datasets. Permissions, compression, deduplication, and quotas can be set on a per-dataset basis, allowing more granular control over access to storage data. Like a folder or directory, permissions can be set on a dataset. Datasets are also similar to filesystems in that properties such as quotas and compression can be set and snapshots created.

Note

ZFS provides thick provisioning using quotas and thin provisioning using reserved space.

Selecting an existing ZFS volume in the tree and clicking Create Dataset shows the screen in Figure 8.1.4.

_images/storage-dataset.png

Fig. 8.1.4 Creating a ZFS Dataset

Table 8.1.4 shows the options available when creating a dataset. Some settings are only available in Advanced Mode. To see these settings, either click the Advanced Mode button, or configure the system to always display advanced settings by enabling the box Show advanced fields by default option in System ‣ Advanced. Most attributes, except for the Dataset Name, Case Sensitivity, and Record Size, can be changed after dataset creation by highlighting the dataset name and clicking the Edit Options button in Storage ‣ Volumes.

Table 8.1.4 ZFS Dataset Options
Setting Value Description
Dataset Name string Enter a mandatory unique name for the dataset.
Comments string Enter optional comments or notes about this dataset.
Sync drop-down menu Sets the data write synchronization: Inherit inherits the sync settings from the parent dataset; Standard uses the sync settings that have been requested by the client software; Always always waits for data writes to complete; Disabled never waits for writes to complete.
Compression Level drop-down menu Refer to the section on Compression for a description of the available algorithms.
Share type drop-down menu Select the type of share that will be used on the dataset. Choices are UNIX for an NFS share, Windows for a SMB share, or Mac for an AFP share.
Enable atime Inherit, On, or Off Choose On to update the access time for files when they are read. Choose Off to prevent producing log traffic when reading files. This can result in significant performance gains.
Quota for this dataset integer Only available in Advanced Mode. Default of 0 disables quotas; specifying a value means to use no more than the specified size and is suitable for user datasets to prevent users from hogging available space.
Quota for this dataset and all children integer Only available in Advanced Mode. A specified value applies to both this dataset and any child datasets.
Reserved space for this dataset integer Only available in Advanced Mode. Default of 0 is unlimited; specifying a value means to keep at least this much space free and is suitable for datasets containing logs which could take up all available free space.
Reserved space for this dataset and all children integer Only available in Advanced Mode. A specified value applies to both this dataset and any child datasets.
ZFS Deduplication drop-down menu Read the section on Deduplication before making a change to this setting.
Read-Only drop-down menu Only available in Advanced Mode. Choices are Inherit (off), On, or Off.
Exec drop-down menu Only available in Advanced Mode. Choices are Inherit (on), On, or Off.
Record Size drop-down menu Only available in Advanced Mode. While ZFS dynamically adapts the record size to the data being written, if the data has a fixed size, matching that size can result in better performance.
Case Sensitivity drop-down menu Sensitive is the default and assumes filenames are case sensitive. Insensitive assumes filenames are not case sensitive. Mixed understands both types of filenames.

After a dataset is created, click on that dataset and select Create Dataset to create a nested dataset, or a dataset within a dataset. A zvol can also be created within a dataset. When creating datasets, double-check that you are using the Create Dataset option for the intended volume or dataset. If you get confused when creating a dataset on a volume, click all existing datasets to close them; the remaining Create Dataset option applies to the volume.
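A dataset created in this screen is an ordinary ZFS dataset. For comparison, a minimal Shell sketch that creates a child dataset with lz4 compression and a 10 GiB quota is shown below; the names are placeholders and the GUI remains the recommended method on FreeNAS®.

  # Hypothetical example: create a child dataset on the pool "tank"
  # with lz4 compression and a 10 GiB quota.
  zfs create -o compression=lz4 -o quota=10G tank/dataset1
  # Review the properties that were set.
  zfs get compression,quota tank/dataset1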

8.1.3.1. Deduplication

Deduplication is the process of ZFS transparently reusing a single copy of duplicated data to save space. Depending on the amount of duplicate data, deduplication can improve storage capacity, as less data is written and stored. However, deduplication is RAM intensive. A general rule of thumb is 5 GB of RAM per terabyte of deduplicated storage. In most cases, compression provides storage gains comparable to deduplication with less impact on performance.

In FreeNAS®, deduplication can be enabled during dataset creation. Be forewarned that there is no way to undedup the data within a dataset once deduplication is enabled, as disabling deduplication has NO EFFECT on existing data. The more data written to a deduplicated dataset, the more RAM it requires. When the system starts storing the DDTs (dedup tables) on disk because they no longer fit into RAM, performance craters. Further, importing an unclean pool can require 3 to 5 GB of RAM per terabyte of deduplicated data, and if the system does not have the needed RAM, it will panic. The only solution is to add more RAM or recreate the pool. Think carefully before enabling dedup! This article provides a good description of the value versus cost considerations for deduplication.

Unless a lot of RAM and a lot of duplicate data is available, do not change the default deduplication setting of “Off”. For performance reasons, consider using compression rather than turning this option on.

If deduplication is changed to On, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared among files. If deduplication is changed to Verify, ZFS will do a byte-to-byte comparison when two blocks have the same signature to make sure that the block contents are identical. Since hash collisions are extremely rare, Verify is usually not worth the performance hit.

Note

After deduplication is enabled, the only way to disable it is to use the zfs set dedup=off dataset_name command from Shell. However, any data that has already been deduplicated will not be un-deduplicated. Only newly stored data after the property change will not be deduplicated. The only way to remove existing deduplicated data is to copy all of the data off of the dataset, set the property to off, then copy the data back in again. Alternately, create a new dataset with ZFS Deduplication left disabled, copy the data to the new dataset, and destroy the original dataset.
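To see how much space deduplication is actually saving, the deduplication ratio of a pool can be checked from Shell. This is a read-only sketch with placeholder names.

  # Show the overall deduplication ratio for the pool "tank".
  zpool list -o name,size,allocated,dedupratio tank
  # Show the dedup setting of an individual dataset.
  zfs get dedup tank/dataset1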

Tip

Deduplication is often considered when using a group of very similar virtual machine images. However, other features of ZFS can provide dedup-like functionality more efficiently. For example, create a dataset for a standard VM, then clone a snapshot of that dataset for other VMs. Only the difference between each created VM and the main dataset are saved, giving the effect of deduplication without the overhead.

8.1.3.2. Compression

When selecting a compression type, you need to balance performance with the amount of disk space saved by compression. Compression is transparent to the client and applications as ZFS automatically compresses data as it is written to a compressed dataset or zvol and automatically decompresses that data as it is read. These compression algorithms are supported:

  • lz4: default and recommended compression method as it allows compressed datasets to operate at near real-time speed. This algorithm only compresses the files that will benefit from compression.
  • gzip: varies from levels 1 to 9 where gzip fastest (level 1) gives the least compression and gzip maximum (level 9) provides the best compression but is discouraged due to its performance impact.
  • zle: fast but simple algorithm which eliminates runs of zeroes.
  • lzjb: provides decent data compression, but is considered deprecated as lz4 provides much better performance.

If you select Off as the Compression level when creating a dataset or zvol, compression will not be used on that dataset/zvol. This is not recommended as using lz4 has a negligible performance impact and allows for more storage capacity.
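After some data has been written, the effectiveness of the chosen algorithm can be checked from Shell. A small sketch with placeholder names:

  # Hypothetical example: enable lz4 on an existing dataset and check
  # how well the stored data is compressing.
  zfs set compression=lz4 tank/dataset1
  zfs get compression,compressratio tank/dataset1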

8.1.4. Create zvol

A zvol is a feature of ZFS that creates a raw block device over ZFS. The zvol can be used as an iSCSI device extent.

To create a zvol, select an existing ZFS volume or dataset from the tree then click Create zvol to open the screen shown in Figure 8.1.5.

_images/storage-zvol.png

Fig. 8.1.5 Creating a Zvol

The configuration options are described in Table 8.1.5. Some settings are only available in Advanced Mode. To see these settings, either click the Advanced Mode button or configure the system to always display these settings by enabling Show advanced fields by default in System ‣ Advanced.

Table 8.1.5 zvol Configuration Options
Setting Value Description
zvol Name string Enter a short name for the zvol. Using a zvol name longer than 63 characters can prevent accessing zvols as devices. For example, a zvol with a 70-character filename or path cannot be used as an iSCSI extent. This setting is mandatory.
Comments string Enter any notes about this zvol.
Size for this zvol integer Specify the size and a unit, such as 10 GiB. If the size is more than 80% of the available capacity, the creation will fail with an “out of space” error unless Force size is also enabled.
Force size checkbox By default, the system will not create a zvol if that operation will bring the pool to over 80% capacity. While NOT recommended, enabling this option will force the creation of the zvol.
Compression level drop-down menu Refer to the section on Compression for a description of the available algorithms.
Sparse volume checkbox Used to provide thin provisioning. Use with caution; when this option is selected, writes will fail when the pool is low on space.
Block size drop-down menu Only available in Advanced Mode. The default is based on the number of disks in the pool. Can be set to match the block size of the filesystem which will be formatted onto the iSCSI target.
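A zvol made in this screen is a standard ZFS volume. For comparison, a hedged Shell sketch with placeholder names is shown below; the GUI remains the supported method on FreeNAS®.

  # Hypothetical example: create a 10 GiB zvol, then a sparse
  # (thin-provisioned) zvol of the same size.
  zfs create -V 10G tank/zvol1
  zfs create -s -V 10G tank/zvol2
  # zvols appear as raw block devices under /dev/zvol/.
  ls /dev/zvol/tank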

8.1.5. Import Disk

The Volume ‣ Import Disk screen, shown in Figure 8.1.6, is used to import a single disk that has been formatted with the UFS, NTFS, MSDOS, or EXT2 filesystem. The import is meant to be a temporary measure to copy the data from a disk to an existing ZFS dataset. Only one disk can be imported at a time.

Note

Imports of EXT3 or EXT4 filesystems are possible in some cases, although neither is fully supported. EXT3 journaling is not supported, so those filesystems must have an external fsck utility, like the one provided by E2fsprogs utilities, run on them before import. EXT4 filesystems with extended attributes or inodes greater than 128 bytes are not supported. EXT4 filesystems with EXT3 journaling must have an fsck run on them before import, as described above.

_images/storage-import.png

Fig. 8.1.6 Importing a Disk

Use the drop-down menu to select the disk to import, select the type of filesystem on the disk, and browse to the ZFS dataset that will hold the copied data. When you click Import Volume, the disk is mounted, its contents are copied to the specified ZFS dataset, and the disk is unmounted after the copy operation completes.

8.1.6. Import Volume

Click Storage ‣ Volumes ‣ Import Volume to configure FreeNAS® to use an existing ZFS pool. This action is typically performed when an existing FreeNAS® system is re-installed. Since the operating system is separate from the storage disks, a new installation does not affect the data on the disks. However, the new operating system needs to be configured to use the existing volume.

Figure 8.1.7 shows the initial pop-up window that appears when a volume is imported.

_images/auto1.png

Fig. 8.1.7 Initial Import Volume Screen

If importing an unencrypted ZFS pool, select No: Skip to import to open the screen shown in Figure 8.1.8.

_images/auto2.png

Fig. 8.1.8 Importing a Non-Encrypted Volume

Existing volumes should be available for selection from the drop-down menu. In the example shown in Figure 8.1.8, the FreeNAS® system has an existing, unencrypted ZFS pool. Once the volume is selected, click the OK button to import the volume.

If an existing ZFS pool does not show in the drop-down menu, run zpool import from Shell to import the pool.

If physically installing ZFS formatted disks from another system, be sure to export the drives on that system first to prevent an “in use by another machine” error during the import.

If the hardware is not being detected, run camcontrol devlist from Shell. If the disk does not appear in the output, check to see if the controller driver is supported or if it needs to be loaded using Tunables.
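A typical troubleshooting sequence from Shell, with a placeholder pool name, looks like this; importing through the GUI is still preferred when the pool appears in the drop-down menu:

  # On the original system, cleanly export the pool before moving the disks.
  zpool export tank
  # On the FreeNAS system, list pools that are available for import.
  zpool import
  # Import a specific pool by name.
  zpool import tank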

8.1.6.1. Importing an Encrypted Pool

Disks in existing GELI-encrypted ZFS pools must be decrypted before importing the pool. In the Import Volume dialog shown in Figure 8.1.7, select Yes: Decrypt disks. The screen shown in Figure 8.1.9 is then displayed.

_images/decrypt.png

Fig. 8.1.9 Decrypting Disks Before Importing a ZFS Pool

Select the disks in the encrypted pool, browse to the location of the saved encryption key, enter the passphrase associated with the key, then click OK to decrypt the disks.

Note

The encryption key is required to decrypt the pool. If the pool cannot be decrypted, it cannot be re-imported after a failed upgrade or lost configuration. This means that it is very important to save a copy of the key and to remember the passphrase that was configured for the key. Refer to Managing Encrypted Volumes for instructions on how to manage the keys for encrypted volumes.

After the pool is decrypted, it appears in the drop-down menu of Figure 8.1.8. Click the OK button to finish the volume import.

Note

For security reasons, GELI keys for encrypted volumes are not saved in a configuration backup file. When FreeNAS® has been installed to a new device and a saved configuration file restored to it, the GELI keys for encrypted disks will not be present, and the system will not request them. To correct this, export the encrypted volume with Detach Volume, making sure that Mark the disks as new (destroy data) and Also delete the share’s configuration are not selected. Then import the volume again. During the import, the GELI keys can be entered as described above.

8.1.7. View Disks

Storage ‣ Volumes ‣ View Disks shows all of the disks recognized by the FreeNAS® system. An example is shown in Figure 8.1.10.

_images/view.png

Fig. 8.1.10 Viewing Disks

The current configuration of each device is displayed. Click a disk entry and the Edit button to change its configuration. The configurable options are described in Table 8.1.6.

Table 8.1.6 Disk Options
Setting Value Description
Name string This is the FreeBSD device name of the disk.
Serial string This is the serial number of the disk.
Description string Enter any notes about this disk.
HDD Standby drop-down menu Indicates the time of inactivity (in minutes) before the drive enters standby mode in order to conserve energy. This forum post demonstrates how to determine if a drive has spun down.
Advanced Power Management drop-down menu Select a power management profile from the menu. Default is Disabled.
Acoustic Level drop-down menu Modify for disks that understand AAM. Default is Disabled.
Enable S.M.A.R.T. checkbox Enabled by default if the disk supports S.M.A.R.T. Deselect to disable any configured S.M.A.R.T. Tests for the disk.
S.M.A.R.T. extra options string Enter additional smartctl(8) options.
Password for SED string Enter and confirm the password that will be used for this device instead of the global SED password. Refer to Self-Encrypting Drives for more information.

Note

If the serial number of a disk is not displayed in this screen, use the smartctl command from Shell. For example, to determine the serial number of disk ada0, type smartctl -a /dev/ada0 | grep Serial.

The Wipe function is provided for when an unused disk is to be discarded.

Warning

Make certain that all data has been backed up and that the disk is no longer in use. Triple-check that the correct disk is being selected to be wiped, as recovering data from a wiped disk is usually impossible. If there is any doubt, physically remove the disk, verify that all data is still present on the FreeNAS® system, and wipe the disk in a separate computer.

Clicking Wipe offers several choices. Quick erases only the partitioning information on a disk, making it easy to reuse but without clearing other old data. For more security, Full with zeros overwrites the entire disk with zeros, while Full with random data overwrites the entire disk with random binary data.

Quick wipes take only a few seconds. A Full with zeros wipe of a large disk can take several hours, and a Full with random data takes longer. A progress bar is displayed during the wipe to track status.

8.1.8. Volumes

Storage ‣ Volumes is used to view and further configure existing ZFS pools, datasets, and zvols. The example shown in Figure 8.1.11 shows one ZFS pool (volume1) with two datasets (the one automatically created with the pool, volume1, and dataset1) and one zvol (zvol1).

Note that in this example, there are two entries named volume1. The first represents the ZFS pool, and its Used and Available entries reflect the total size of the pool, including disk parity. The second represents the implicit or root dataset, and its Used and Available entries indicate the amount of disk space available for storage.

Buttons are provided for quick access to Volume Manager, Import Disk, Import Volume, and View Disks. If the system has multipath-capable hardware, an extra button will be added, View Multipaths. For each entry, the columns indicate the Name, how much disk space is Used, how much disk space is Available, the type of Compression, the Compression Ratio, the Status, whether it is mounted as read-only, and any Comments entered for the volume.

_images/storage-volumes.png

Fig. 8.1.11 Viewing Volumes

Clicking the entry for a pool causes several buttons to appear at the bottom of the screen.

Detach Volume: allows exporting the pool or deleting the contents of the pool, depending upon the choice made in the screen shown in Figure 8.1.12. The Detach Volume screen displays the current used space and indicates whether there are any shares, provides options to Mark the disks as new (destroy data) and to Also delete the share’s configuration, and asks for confirmation. The browser window turns red to indicate that some choices will make the data inaccessible. When the option to mark the disks as new is left deselected, the volume is exported. The data is not destroyed and the volume can be re-imported at a later time. When moving a ZFS pool from one system to another, perform this export action first, as it flushes any unwritten data to disk, writes data to the disk indicating that the export was done, and removes all knowledge of the pool from the system.

When the option to mark the disks as new is selected, the pool and all the data in its datasets, zvols, and shares is destroyed and the individual disks are returned to their raw state. Desired data must be backed up to another disk or device before using this option.

_images/storage-detach.png

Fig. 8.1.12 Detach or Delete a Volume

Scrub Volume: scrubs and their scheduling are described in more detail in Scrubs. This button allows manually initiating a scrub. Scrubs are I/O intensive and can negatively impact performance. Avoid initiating a scrub when the system is busy.

A Cancel button is provided to cancel a scrub. When a scrub is cancelled, it is abandoned. The next scrub to run starts from the beginning, not where the cancelled scrub left off.

The status of a running scrub or the statistics from the last completed scrub can be seen by clicking the Volume Status button.
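These buttons correspond to standard ZFS scrub operations. A hedged Shell sketch with a placeholder pool name:

  # Start a scrub of the pool "volume1".
  zpool scrub volume1
  # Check scrub progress and any errors found so far.
  zpool status volume1
  # Cancel (abandon) a running scrub.
  zpool scrub -s volume1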

Volume Status: as shown in the example in Figure 8.1.13, this screen shows the device name and status of each disk in the ZFS pool as well as any read, write, or checksum errors. It also indicates the status of the latest ZFS scrub. Clicking the entry for a device causes buttons to appear to edit the device options (shown in Figure 8.1.14), offline or online the device, or replace the device (as described in Replacing a Failed Drive).

Upgrade: used to upgrade the pool to the latest ZFS features, as described in Upgrading a ZFS Pool. This button does not appear if the pool is running the latest version of feature flags.

_images/storage-volstatus.png

Fig. 8.1.13 Volume Status

Selecting a disk in Volume Status and clicking its Edit Disk button shows the screen in Figure 8.1.14. Table 8.1.6 summarizes the configurable options.

_images/disk.png

Fig. 8.1.14 Editing a Disk

Note

Versions of FreeNAS® prior to 8.3.1 required a reboot to apply changes to the HDD Standby, Advanced Power Management, and Acoustic Level settings. As of 8.3.1, changes to these settings are applied immediately.

Clicking a dataset in Storage ‣ Volumes causes buttons to appear at the bottom of the screen, providing these options:

Change Permissions: edit the dataset permissions as described in Change Permissions.

Create Snapshot: create a one-time snapshot. To schedule the regular creation of snapshots, instead use Periodic Snapshot Tasks.

Promote Dataset: only applies to clones. When a clone is promoted, the origin filesystem becomes a clone of the clone, making it possible to destroy the filesystem that the clone was created from. Otherwise, a clone cannot be destroyed while the origin filesystem exists.

Destroy Dataset: clicking the Destroy Dataset button causes the browser window to turn red to indicate that this is a destructive action. The Destroy Dataset screen forces you to enable the option I’m aware this will destroy all child datasets and snapshots within this dataset before it will perform this action.

Edit Options: edit the dataset properties described in Table 8.1.4. Note that the dataset name cannot be changed.

Create Dataset: used to create a child dataset within this dataset.

Create zvol: create a child zvol within this dataset.

Clicking a zvol in Storage ‣ Volumes causes icons to appear at the bottom of the screen: Create Snapshot, Promote Dataset, Edit zvol, and Destroy zvol. Similar to datasets, a zvol name cannot be changed, and destroying a zvol requires confirmation.

8.1.8.1. Managing Encrypted Volumes

FreeNAS® generates and stores a randomized encryption key whenever a new encrypted volume is created. This key is required to read and decrypt any data on the volume.

Encryption keys can also be downloaded as a safety measure, to allow decryption on a different system in the event of failure, or to allow the locally stored key to be deleted for extra security. Encryption keys can also be optionally protected with a passphrase for additional security. The combination of encryption key location and whether a passphrase is used provide several different security scenarios:

  • Key stored locally, no passphrase: the encrypted volume is decrypted and accessible when the system is running. Protects “data at rest” only.
  • Key stored locally, with passphrase: the encrypted volume is not accessible until the passphrase is entered by the FreeNAS® administrator.
  • Key not stored locally: the encrypted volume is not accessible until the FreeNAS® administrator provides the key. If a passphrase is set on the key, it must also be entered before the encrypted volume can be accessed (two factor authentication).

Encrypted data cannot be accessed when the disks are removed or the system has been shut down. On a running system, encrypted data cannot be accessed when the volume is locked (see below) and the key is not available. If the key is protected with a passphrase, both the key and passphrase are required for decryption.

Encryption applies to a volume, not individual users. When a volume is unlocked, data is accessible to all users with permissions to access it.

Note

GELI uses two randomized encryption keys for each disk. The first has been discussed here. The second, the disk’s “master key”, is encrypted and stored in the on-disk GELI metadata. Loss of a disk master key due to disk corruption is equivalent to any other disk failure, and in a redundant pool, other disks will contain accessible copies of the uncorrupted data. While it is possible to separately back up disk master keys, it is usually not necessary or useful.
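The state of the GELI layer on the member disks can be inspected from Shell with a read-only sketch like the following:

  # List all attached GELI providers and whether they are ACTIVE.
  geli status
  # Show detailed GELI metadata for every managed provider.
  geli list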

8.1.8.2. Additional Controls for Encrypted Volumes

If the Encryption option is enabled during the creation of a pool, additional buttons appear in the entry for the volume in Storage ‣ Volumes. An example is shown in Figure 8.1.15.

_images/storage-encrypted.png

Fig. 8.1.15 Encryption Icons Associated with an Encrypted Volume

These additional encryption buttons are used to:

Create/Change Passphrase: set and confirm a passphrase associated with the GELI encryption key. The desired passphrase is entered and repeated for verification. A red warning is a reminder to Remember to add a new recovery key as this action invalidates the previous recovery key. Unlike a password, a passphrase can contain spaces and is typically a series of words. A good passphrase is easy to remember (like the line to a song or piece of literature) but hard to guess (people who know you should not be able to guess the passphrase). Remember this passphrase. An encrypted volume cannot be reimported without it. In other words, if the passphrase is forgotten, the data on the volume can become inaccessible if it becomes necessary to reimport the pool. Protect this passphrase, as anyone who knows it could reimport the encrypted volume, thwarting the reason for encrypting the disks in the first place.

_images/encrypt-passphrase.png

Fig. 8.1.16 Add or Change a Passphrase to an Encrypted Volume

After the passphrase is set, the name of this button changes to Change Passphrase. After setting or changing the passphrase, it is important to immediately create a new recovery key by clicking the Add recovery key button. This way, if the passphrase is forgotten, the associated recovery key can be used instead.

Encrypted volumes with a passphrase display an additional lock button:

_images/encrypt-lock.png

Fig. 8.1.17 Lock Button

These encrypted volumes can be locked. The data is not accessible until the volume is unlocked by supplying the passphrase or encryption key, and the button changes to an unlock button:

_images/encrypt-unlock.png

Fig. 8.1.18 Unlock Button

To unlock the volume, click the unlock button to display the Unlock dialog:

_images/encrypt-unlock-dialog.png

Fig. 8.1.19 Unlock Locked Volume

Unlock the volume by entering a passphrase or using the Browse button to load the recovery key. If both a passphrase and a recovery key are entered, only the passphrase is used. By default, the services listed will restart when the volume is unlocked. This allows them to see the new volume and share or access data on it. Individual services can be prevented from restarting by deselecting them. However, a service that is not restarted might not be able to access the unlocked volume.

Download Key: download a backup copy of the GELI encryption key. The encryption key is saved to the client system, not on the FreeNAS® system. The FreeNAS® administrative password must be entered, then the directory in which to store the key is chosen. Since the GELI encryption key is separate from the FreeNAS® configuration database, it is highly recommended to make a backup of the key. If the key is ever lost or destroyed and there is no backup key, the data on the disks is inaccessible.

Encryption Re-key: generate a new GELI encryption key. Typically this is only performed when the administrator suspects that the current key may be compromised. This action also removes the current passphrase.

Add recovery key: generate a new recovery key. This screen prompts for the FreeNAS® administrative password and then the directory in which to save the key. Note that the recovery key is saved to the client system, not on the FreeNAS® system. This recovery key can be used if the passphrase is forgotten. Always immediately add a recovery key whenever the passphrase is changed.

Remove recovery key: Typically this is only performed when the administrator suspects that the current recovery key may be compromised. Immediately create a new passphrase and recovery key.

Note

The passphrase, recovery key, and encryption key must be protected. Do not reveal the passphrase to others. On the system containing the downloaded keys, take care that the system and its backups are protected. Anyone who has the keys has the ability to re-import the disks if they are discarded or stolen.

Warning

If a re-key fails on a multi-disk system, an alert is generated. Do not ignore this alert as doing so may result in the loss of data.

8.1.9. View Multipaths

FreeNAS® uses gmultipath(8) to provide multipath I/O support on systems containing hardware that is capable of multipath. An example would be a dual SAS expander backplane in the chassis or an external JBOD.

Multipath hardware adds fault tolerance to a NAS as the data is still available even if one disk I/O path has a failure.

FreeNAS® automatically detects active/active and active/passive multipath-capable hardware. Any multipath-capable devices that are detected will be placed in multipath units with the parent devices hidden. The configuration will be displayed in Storage ‣ Volumes ‣ View Multipaths. Note that this option is not displayed in the Storage ‣ Volumes tree on systems that do not contain multipath-capable hardware.
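The detected multipath units can also be reviewed from Shell with a read-only sketch:

  # Show each multipath unit, its state, and its member paths.
  gmultipath status
  # Show detailed information about every multipath unit.
  gmultipath list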

8.1.10. Replacing a Failed Drive

With any form of redundant RAID, failed drives must be replaced as soon as possible to repair the degraded state of the RAID. Depending on the hardware capabilities, it might be necessary to reboot to replace the failed drive. Hardware that supports AHCI does not require a reboot.

Note

Striping (RAID0) does not provide redundancy. If a disk in a stripe fails, the volume will be destroyed and must be recreated and the data restored from backup.

Note

If the volume is encrypted with GELI, refer to Replacing an Encrypted Drive before proceeding.

Before physically removing the failed device, go to Storage ‣ Volumes. Select the volume name. At the bottom of the interface are several icons, one of which is Volume Status. Click the Volume Status icon and locate the failed disk. Then perform these steps:

  1. Click the disk entry, then its Offline button to change the disk status to OFFLINE. This step removes the device from the ZFS pool and prevents swap issues. If the hardware supports hot-pluggable disks, click the disk’s Offline button and pull the disk, then skip to step 3. If there is no Offline button but only a Replace button, the disk is already offlined and this step can be skipped.

    Note

    If the process of changing the disk status to OFFLINE fails with a “disk offline failed - no valid replicas” message, the ZFS volume must be scrubbed first with the Scrub Volume button in Storage ‣ Volumes. After the scrub completes, try to Offline the disk again before proceeding.

  2. If the hardware is not AHCI capable, shut down the system to physically replace the disk. When finished, return to the GUI and locate the OFFLINE disk.

  3. After the disk has been replaced and is showing as OFFLINE, click the disk again and then click its Replace button. Select the replacement disk from the drop-down menu and click the Replace Disk button. After clicking the Replace Disk button, the ZFS pool begins resilvering.

  4. After the drive replacement process is complete, re-add the replaced disk in the S.M.A.R.T. Tests screen.

In the example shown in Figure 8.1.20, a failed disk is being replaced by disk ada5 in the volume named volume1.

_images/replace.png

Fig. 8.1.20 Replacing a Failed Disk

After the resilver is complete, Volume Status shows a Completed resilver status and indicates any errors. Figure 8.1.21 indicates that the disk replacement was successful in this example.

Note

A disk that is failing but has not completely failed can be replaced in place, without first removing it. Whether this is a good idea depends on the overall condition of the failing disk. A disk with a few newly-bad blocks that is otherwise functional can be left in place during the replacement to provide data redundancy. A drive that is experiencing continuous errors can actually slow down the replacement. In extreme cases, a disk with serious problems might spend so much time retrying failures that it could prevent the replacement resilvering from completing before another drive fails.

_images/replace2.png

Fig. 8.1.21 Disk Replacement is Complete

8.1.10.1. Replacing an Encrypted Drive

If the ZFS pool is encrypted, additional steps are needed when replacing a failed drive.

First, make sure that a passphrase has been set using the instructions in Encryption before attempting to replace the failed drive. Then follow steps 1 and 2 as described above. During step 3, a prompt will appear to input and confirm the passphrase for the pool. Enter this information, then click the Replace Disk button. Wait until the resilvering is complete.

Next, restore the encryption keys to the pool. If the following additional steps are not performed before the next reboot, access to the pool might be permanently lost.

  1. Highlight the pool that contains the recently replaced disk and click the Encryption Re-key button in the GUI. Entry of the root password will be required.
  2. Highlight the pool that contains the disk that was just replaced, click Create Passphrase, and enter the new passphrase. The old passphrase can be reused if desired.
  3. Highlight the pool that contains the recently replaced disk and click the Download Key button to save the new encryption key. Since the old key will no longer function, any old keys can be safely discarded.
  4. Highlight the pool that contains the disk that was just replaced and click the Add Recovery Key button to save the new recovery key. The old recovery key will no longer function, so it can be safely discarded.

8.1.10.2. Removing a Log or Cache Device

Added log or cache devices appear in Storage ‣ Volumes ‣ Volume Status. Clicking the device enables its Replace and Remove buttons.

Log and cache devices can be safely removed or replaced with these buttons. Both types of devices improve performance, and throughput can be impacted by their removal.

8.1.11. Replacing Drives to Grow a ZFS Pool

The recommended method for expanding the size of a ZFS pool is to pre-plan the number of disks in a vdev and to stripe additional vdevs using Volume Manager as additional capacity is needed.

However, this is not an option if there are no open drive ports and a SAS/SATA HBA card cannot be added. In this case, one disk at a time can be replaced with a larger disk, waiting for the resilvering process to incorporate the new disk into the pool, then repeating with another disk until all of the original disks have been replaced.

The safest way to perform this is to use a spare drive port or an eSATA port and a hard drive dock. The process follows these steps:

  1. Shut down the system.
  2. Install one new disk.
  3. Start up the system.
  4. Go to Storage ‣ Volumes, select the pool to expand and click the Volume Status button. Select a disk and click the Replace button. Choose the new disk as the replacement.
  5. The status of the resilver process can be viewed by running zpool status. When the new disk has resilvered, the old one will be automatically offlined. The system is then shut down to physically remove the replaced disk. One advantage of this approach is that there is no loss of redundancy during the resilver.

If a spare drive port is not available, a drive can be replaced with a larger one using the instructions in Replacing a Failed Drive. This process is slow and places the system in a degraded state. Since a failure at this point could be disastrous, do not attempt this method unless the system has a reliable backup. Replace one drive at a time and wait for the resilver process to complete on the replaced drive before replacing the next drive. After all the drives are replaced and the final resilver completes, the added space will appear in the pool.
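On plain ZFS, the new capacity only becomes available once the pool is allowed to expand onto the larger disks. FreeNAS® normally handles this when the replacements are performed through the GUI; the following Shell sketch, with a placeholder pool name, illustrates the underlying mechanism only.

  # Allow the pool to grow automatically once every disk in a vdev
  # has been replaced with a larger one.
  zpool set autoexpand=on tank
  # Show how much additional capacity is waiting to be claimed.
  zpool list -o name,size,expandsize tank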

8.1.12. Hot Spares

ZFS provides the ability to have “hot” spares. These are drives that are connected to a volume, but not in use. If the volume experiences the failure of a data drive, the system uses the hot spare as a temporary replacement. If the failed drive is replaced with a new drive, the hot spare drive is no longer needed and reverts to being a hot spare. If the failed drive is instead removed from the volume, the spare is promoted to a full member of the volume.

Hot spares can be added to a volume during or after creation. On FreeNAS®, hot spare actions are implemented by zfsd(8).
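In ZFS terms, a hot spare is an extra device attached to the pool with the spare role. A hedged Shell sketch with placeholder names, shown for illustration only:

  # Hypothetical example: attach ada6 to the pool "tank" as a hot spare.
  zpool add tank spare ada6
  # The spare is listed in its own "spares" section of the pool status.
  zpool status tank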

8.2. Periodic Snapshot Tasks

A periodic snapshot task allows scheduling the creation of read-only versions of ZFS volumes and datasets at a given point in time. Snapshots can be created quickly and, if little data changes, new snapshots take up very little space. For example, a snapshot where no files have changed takes 0 MB of storage, but as changes are made to files, the snapshot size changes to reflect the size of the changes.

Snapshots provide a clever way of keeping a history of files, providing a way to recover an older copy or even a deleted file. For this reason, many administrators take snapshots often (perhaps every fifteen minutes), store them for a period of time (possibly a month), and store them on another system (typically using Replication Tasks). Such a strategy allows the administrator to roll the system back to a specific point in time. If there is a catastrophic loss, an off-site snapshot can be used to restore the system up to the time of the last snapshot.

An existing ZFS volume is required before creating a snapshot. Creating a volume is described in Volume Manager.

To create a periodic snapshot task, click Storage ‣ Periodic Snapshot Tasks ‣ Add Periodic Snapshot which opens the screen shown in Figure 8.2.1. Table 8.2.1 summarizes the fields in this screen.

Note

If only a one-time snapshot is needed, instead use Storage ‣ Volumes and click the Create Snapshot button for the volume or dataset to snapshot.

_images/storage-periodic-snapshot.png

Fig. 8.2.1 Creating a Periodic Snapshot

Table 8.2.1 Options When Creating a Periodic Snapshot
Setting Value Description
Volume/Dataset drop-down menu Select an existing ZFS volume, dataset, or zvol.
Recursive checkbox Enable this option to take separate snapshots of the volume or dataset and each of its child datasets. If unchecked, a single snapshot is taken of only the specified volume/dataset, but not any child datasets.
Snapshot Lifetime integer and drop-down menu Define a length of time to retain the snapshot on this system. After the time expires, the snapshot is removed. Snapshots replicated to other systems are not affected.
Begin drop-down menu Choose the hour and minute when the system can begin taking snapshots.
End drop-down menu Choose the hour and minute when the system will stop taking snapshots.
Interval drop-down menu Define how often the system takes snapshots between Begin and End times.
Weekday checkboxes Choose the days of the week to take snapshots.
Enabled checkbox Unset to disable this task without deleting it.

If the Recursive option is enabled, child datasets of this dataset are included in the snapshot and there is no need to create snapshots for each child dataset. The downside is that there is no way to exclude particular child datasets from a recursive snapshot.
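
For reference, a recursive snapshot is equivalent to running zfs snapshot with the -r flag from a Shell. A minimal sketch, using a hypothetical dataset and snapshot name:

zfs snapshot -r volume1/dataset1@manual-20180101

This creates an atomic snapshot named manual-20180101 of volume1/dataset1 and every child dataset beneath it.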

Click the OK button to save the task. Entries for each task are shown in View Periodic Snapshot Tasks. Click an entry to display Edit and Delete buttons for it.

8.3. Replication Tasks

Replication is the duplication of snapshots from one FreeNAS® system to another computer. When a new snapshot is created on the source computer, it is automatically replicated to the destination computer. Replication is typically used to keep a copy of files on a separate system, with that system sometimes being at a different physical location.

The basic configuration requires a source system with the original data and a destination system where the data will be replicated. The destination system is prepared to receive replicated data, a periodic snapshot of the data on the source system is created, and then a replication task is created. As snapshots are automatically created on the source computer, they are automatically replicated to the destination computer.

Note

Replicated data is not visible on the receiving system until the replication task completes.

Note

The target dataset on the receiving system is automatically created in read-only mode to protect the data. To mount or browse the data on the receiving system, create a clone of the snapshot and use the clone. Clones are created in read/write mode, making it possible to browse or mount them. See Snapshots for more information on creating clones.

8.3.1. Examples: Common Configuration

The examples shown here use the same setup of source and destination computers.

8.3.1.1. Alpha (Source)

Alpha is the source computer with the data to be replicated. It is at IP address 10.0.0.102. A volume named alphavol has already been created, and a dataset named alphadata has been created on that volume. This dataset contains the files which will be snapshotted and replicated onto Beta.

This new dataset has been created for this example, but a new dataset is not required. Most users will already have datasets containing the data they wish to replicate.

Create a periodic snapshot of the source dataset by selecting Storage ‣ Periodic Snapshot Tasks. Click the alphavol/alphadata dataset to highlight it. Create a periodic snapshot of it by clicking Periodic Snapshot Tasks, then Add Periodic Snapshot as shown in Figure 8.3.1.

This example creates a snapshot of the alphavol/alphadata dataset every two hours from Monday through Friday between the hours of 9:00 and 18:00 (6:00 PM). Snapshots are automatically deleted after their chosen lifetime of two weeks expires.

_images/replication3a.png

Fig. 8.3.1 Create a Periodic Snapshot for Replication

8.3.1.2. Beta (Destination)

Beta is the destination computer where the replicated data will be copied. It is at IP address 10.0.0.118. A volume named betavol has already been created.

Snapshots are transferred with SSH. To allow incoming connections, this service is enabled on Beta. The service is not required for outgoing connections, and so does not need to be enabled on Alpha.

8.3.2. Example: FreeNAS® to FreeNAS® Semi-Automatic Setup

FreeNAS® offers a special semi-automatic setup mode that simplifies setting up replication. Create the replication task on Alpha by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. betavol is the destination volume where alphadata snapshots are replicated. The Setup mode dropdown is set to Semi-automatic as shown in Figure 8.3.2. The IP address of Beta is entered in the Remote hostname field. A hostname can be entered here if local DNS resolves for that hostname.

Note

If WebGUI HTTP –> HTTPS Redirect has been enabled in System ‣ General on the destination computer, Remote HTTP/HTTPS Port must be set to the HTTPS port (usually 443) and Remote HTTPS must be enabled when creating the replication on the source computer.

_images/replication6.png

Fig. 8.3.2 Add Replication Dialog, Semi-Automatic

The Remote Auth Token field expects a special token from the Beta computer. On Beta, choose Storage ‣ Replication Tasks, then click Temporary Auth Token. A dialog showing the temporary authorization token is shown as in Figure 8.3.3.

Highlight the temporary authorization token string with the mouse and copy it.

_images/replication7.png

Fig. 8.3.3 Temporary Authentication Token on Destination

On the Alpha system, paste the copied temporary authorization token string into the Remote Auth Token field as shown in Figure 8.3.4.

_images/replication8.png

Fig. 8.3.4 Temporary Authentication Token Pasted to Source

Finally, click the OK button to create the replication task. After each periodic snapshot is created, a replication task will copy it to the destination system. See Limiting Replication Times for information about restricting when replication is allowed to run.

Note

The temporary authorization token is only valid for a few minutes. If a Token is invalid message is shown, get a new temporary authorization token from the destination system, clear the Remote Auth Token field, and paste in the new one.

8.3.3. Example: FreeNAS® to FreeNAS® Dedicated User Replication

A dedicated user can be used for replications rather than the root user. This example shows the process using the semi-automatic replication setup between two FreeNAS® systems with a dedicated user named repluser. SSH key authentication is used to allow the user to log in remotely without a password.

In this example, the periodic snapshot task has not been created yet. If the periodic snapshot shown in the example configuration has already been created, go to Storage ‣ Periodic Snapshot Tasks, click on the task to select it, and click Delete to remove it before continuing.

On Alpha, select Account ‣ Users. Click Add User. Enter repluser for Username, enter /mnt/alphavol/repluser in the Create Home Directory In field, enter Replication Dedicated User for the Full Name, and set the Disable password login option. Leave the other fields at their default values, but note the User ID number. Click OK to create the user.

The same dedicated user must be created on Beta as was created on the sending computer. Select Account ‣ Users. Click Add User. Enter the User ID number from Alpha, repluser for Username, enter /mnt/betavol/repluser in the Create Home Directory In field, enter Replication Dedicated User for the Full Name, and set the Disable password login option. Leave the other fields at their default values. Click OK to create the user.

A dataset with the same name as the original must be created on the destination computer, Beta. Select Storage ‣ Volumes, click on betavol, then click the Create Dataset icon at the bottom. Enter alphadata as the Dataset Name, then click Add Dataset.

The replication user must be given permissions to the destination dataset. Still on Beta, open a Shell and enter this command:

zfs allow -ldu repluser create,destroy,diff,mount,readonly,receive,release,send,userprop betavol/alphadata

The destination dataset must also be set to read-only. Enter this command in the Shell:

zfs set readonly=on betavol/alphadata
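
Before closing the Shell, the delegated permissions and the read-only property can be verified. A quick check, using the same dataset name:

zfs allow betavol/alphadata
zfs get readonly betavol/alphadata

The first command lists the permissions delegated on the dataset, and the second shows the current value of the readonly property.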

Close the Shell by typing exit and pressing Enter.

The replication user must also be able to mount datasets. Still on Beta, go to System ‣ Tunables. Click Add Tunable. Enter vfs.usermount for the Variable, 1 for the Value, and choose Sysctl from the Type drop-down. Click OK to save the tunable settings.
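
The setting can be checked, or applied immediately for testing, from a Shell on Beta; the tunable created above is what makes it persistent across reboots:

sysctl vfs.usermount
sysctl vfs.usermount=1

The first command prints the current value, which should be 1 once the tunable has been applied.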

Back on Alpha, create a periodic snapshot of the source dataset by selecting Storage ‣ Periodic Snapshot Tasks. Click the alphavol/alphadata dataset to highlight it. Create a periodic snapshot of it by clicking Periodic Snapshot Tasks, then Add Periodic Snapshot as shown in Figure 8.3.1.

Still on Alpha, create the replication task by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. betavol/alphadata is the destination volume and dataset where alphadata snapshots are replicated.

The Setup mode dropdown is set to Semi-automatic as shown in Figure 8.3.2. The IP address of Beta is entered in the Remote hostname field. A hostname can be entered here if local DNS resolves for that hostname.

Note

If WebGUI HTTP –> HTTPS Redirect has been enabled in System ‣ General on the destination computer, Remote HTTP/HTTPS Port must be set to the HTTPS port (usually 443) and Remote HTTPS must be enabled when creating the replication on the source computer.

The Remote Auth Token field expects a special token from the Beta computer. On Beta, choose Storage ‣ Replication Tasks, then click Temporary Auth Token. A dialog showing the temporary authorization token is shown as in Figure 8.3.3.

Highlight the temporary authorization token string with the mouse and copy it.

On the Alpha system, paste the copied temporary authorization token string into the Remote Auth Token field as shown in Figure 8.3.4.

Set the Dedicated User option. Choose repluser in the Dedicated User drop-down.

Click the OK button to create the replication task.

Note

The temporary authorization token is only valid for a few minutes. If a Token is invalid message is shown, get a new temporary authorization token from the destination system, clear the Remote Auth Token field, and paste in the new one.

Replication will begin when the periodic snapshot task runs.

Additional replications can use the same dedicated user that has already been set up. The permissions and read-only settings made through the Shell must be applied to each new destination dataset.

8.3.4. Example: FreeNAS® to FreeNAS® or Other Systems, Manual Setup

This example uses the same basic configuration of source and destination computers shown above, but the destination computer is not required to be a FreeNAS® system. Other operating systems can receive the replication if they support SSH, ZFS, and the same features that are in use on the source system. The details of creating volumes and datasets, enabling SSH, and copying encryption keys will vary when the destination computer is not a FreeNAS® system.

8.3.4.1. Encryption Keys

A public encryption key must be copied from Alpha to Beta to allow a secure connection without a password prompt. On Alpha, select Storage ‣ Replication Tasks ‣ View Public Key, producing the window shown in Figure 8.3.5. Use the mouse to highlight the key data shown in the window, then copy it.

_images/replication1c.png

Fig. 8.3.5 Copy the Replication Key

On Beta, select Account ‣ Users ‣ View Users. Click the root account to select it, then click Modify User. Paste the copied key into the SSH Public Key field and click OK as shown in Figure 8.3.6.

_images/replication4.png

Fig. 8.3.6 Paste the Replication Key

Back on Alpha, create the replication task by clicking Replication Tasks and Add Replication. alphavol/alphadata is selected as the dataset to replicate. The destination volume is betavol. The alphadata dataset and snapshots are replicated there. The IP address of Beta is entered in the Remote hostname field as shown in Figure 8.3.7. A hostname can be entered here if local DNS resolves for that hostname.

Click the SSH Key Scan button to retrieve the SSH host keys from Beta and fill the Remote hostkey field. Finally, click OK to create the replication task. After each periodic snapshot is created, a replication task will copy it to the destination system. See Limiting Replication Times for information about restricting when replication is allowed to run.

_images/replication5.png

Fig. 8.3.7 Add Replication Dialog
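
If desired, the host key of the destination system can also be inspected manually from a Shell, for example to verify what the SSH Key Scan button retrieved. A sketch, assuming Beta's SSH service listens on the default port 22:

ssh-keyscan -p 22 -t rsa 10.0.0.118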

8.3.5. Replication Options

Table 8.3.1 describes the options in the replication task dialog.

Table 8.3.1 Replication Task Options
Setting Value Description
Volume/Dataset drop-down menu On the source computer with snapshots to replicate, choose an existing ZFS pool or dataset with an active periodic snapshot task.
Remote ZFS Volume/Dataset string Enter the ZFS volume or dataset on the remote or destination computer which will store the snapshots. Example: poolname/datasetname, not the mount point or filesystem path.
Recursively replicate child dataset’s snapshots checkbox When enabled, include snapshots of child datasets from the primary dataset.
Delete stale snapshots checkbox Set to delete previous snapshots from the remote or destination system which are no longer present on the source computer.
Replication Stream Compression drop-down menu Choices are lz4 (fastest), pigz (all-rounder), plzip (best compression), or Off (no compression). Selecting a compression algorithm can reduce the size of the data being replicated.
Limit (kB/s) integer Limit replication speed to the specified value in kilobytes/second. The default of 0 is unlimited.
Begin drop-down menu Define a time to start the replication task.
End drop-down menu Define the point in time by which replication must start. A started replication task continues until it is finished.
Enabled checkbox Deselect to disable the scheduled replication task without deleting it.
Setup mode drop-down menu Choose the configuration mode for the remote system. Choices are Manual or Semi-automatic. Note that Semi-automatic only works with remote FreeNAS® versions 9.10.2 or later.
Remote hostname string Enter the IP address or DNS name of remote system to receive the replication data.
Remote port string Enter the port number used by the SSH server on the remote or destination computer.
Dedicated User Enabled checkbox Select to use an account other than root for replication.
Dedicated User drop-down menu Only available if Dedicated User Enabled is checked. Select the user account to be used for replication.
Encryption Cipher drop-down menu Standard, Fast, or Disabled.
Remote hostkey string Use the SSH Key Scan button to retrieve the public host key of the remote or destination computer and populate this field with that key.

The replication task runs after a new periodic snapshot is created. The periodic snapshot and any new manual snapshots of the same dataset are replicated onto the destination computer.

When multiple replications have been created, replication tasks run serially, one after another. Completion time depends on the number and size of snapshots and the bandwidth available between the source and destination computers.

The first time a replication runs, it must duplicate data structures from the source to the destination computer. This can take much longer to complete than subsequent replications, which only send differences in data.
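
Conceptually, this is the difference between a full and an incremental zfs send. A sketch using the example systems and two hypothetical snapshot names, @snap1 (already present on both systems) and @snap2 (newer, only on the source):

zfs send -i alphavol/alphadata@snap1 alphavol/alphadata@snap2 | ssh -i /data/ssh/replication 10.0.0.118 zfs recv betavol/alphadata

Only the blocks that changed between @snap1 and @snap2 are transferred, which is why later replications are usually much faster than the first.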

Warning

Snapshots record incremental changes in data. If the receiving system does not have at least one snapshot that can be used as a basis for the incremental changes in the snapshots from the sending system, there is no way to identify only the data that has changed. In this situation, the snapshots in the receiving system target dataset are removed so a complete initial copy of the new replicated data can be created.

Selecting Storage ‣ Replication Tasks displays Figure 8.3.8, the list of replication tasks. The Last snapshot sent to remote side column shows the name of the last snapshot that was successfully replicated, and Status shows the current status of each replication task. The display is updated every five seconds, always showing the latest status.

_images/replication9a.png

Fig. 8.3.8 Replication Task List

Note

The encryption key that was copied from the source computer (Alpha) to the destination computer (Beta) is an RSA public key located in the /data/ssh/replication.pub file on the source computer. The host public key used to identify the destination computer (Beta) is from the /etc/ssh/ssh_host_rsa_key.pub file on the destination computer.

8.3.6. Replication Encryption

The default Encryption Cipher Standard setting provides good security. Fast is less secure than Standard but can give reasonable transfer rates for devices with limited cryptographic speed. For networks where the entire path between source and destination computers is trusted, the Disabled option can be chosen to send replicated data without encryption.

8.3.7. Limiting Replication Times

The Begin and End times in a replication task make it possible to restrict when replication is allowed. These times can be set to only allow replication after business hours, or at other times when disk or network activity will not slow down other operations like snapshots or Scrubs. The default settings allow replication to occur at any time.

These times control when replication tasks are allowed to start, but they do not stop a replication task that is already running. Once a replication task has begun, it runs until finished.

8.3.8. Troubleshooting Replication

Replication depends on SSH, disks, network, compression, and encryption to work. A failure or misconfiguration of any of these can prevent successful replication.

8.3.8.1. SSH

SSH must be able to connect from the source system to the destination system with an encryption key. This can be tested from Shell by making an SSH connection from the source system to the destination system. From the previous example, this is a connection from Alpha to Beta at 10.0.0.118. Start the Shell on the source machine (Alpha), then enter this command:

ssh -vv -i /data/ssh/replication 10.0.0.118

On the first connection, the system might say

No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)?

Verify that this is the correct destination computer from the preceding information on the screen and type yes. At this point, an SSH shell connection is open to the destination system, Beta.

If a password is requested, SSH authentication is not working. See Figure 8.3.5 above. This key value must be present in the /root/.ssh/authorized_keys file on Beta, the destination computer. The /var/log/auth.log file can show diagnostic errors for login problems on the destination computer also.
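
On Beta, the presence of the key and any login errors can be checked from a Shell. A quick sketch:

cat /root/.ssh/authorized_keys
tail /var/log/auth.log

The first command shows whether the replication public key from Figure 8.3.5 is present, and the second shows recent SSH authentication messages.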

8.3.8.2. Compression

Matching compression and decompression programs must be available on both the source and destination computers. This is not a problem when both computers are running FreeNAS®, but other operating systems might not have lz4, pigz, or plzip compression programs installed by default. An easy way to diagnose the problem is to set Replication Stream Compression to Off. If the replication runs, select the preferred compression method and check /var/log/debug.log on the FreeNAS® system for errors.
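
The availability of the compression programs on the destination system can be checked from a Shell. A quick sketch for a FreeBSD-based destination:

which lz4 pigz plzip

Any program that is not found prints nothing for that name and must be installed before that compression method can be used.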

8.3.8.3. Manual Testing

On Alpha, the source computer, the /var/log/messages file can also show helpful messages to locate the problem.

On the source computer, Alpha, open a Shell and manually send a single snapshot to the destination computer, Beta. The snapshot used in this example is named auto-20161206.1110-2w. As before, it is located in the alphavol/alphadata dataset. A @ symbol separates the name of the dataset from the name of the snapshot in the command.

zfs send alphavol/alphadata@auto-20161206.1110-2w | ssh -i /data/ssh/replication 10.0.0.118 zfs recv betavol

If a snapshot of that name already exists on the destination computer, the system will refuse to overwrite it with the new snapshot. The existing snapshot on the destination computer can be deleted by opening a Shell on Beta and running this command:

zfs destroy -R betavol/alphadata@auto-20161206.1110-2w

Then send the snapshot manually again. Snapshots on the destination system, Beta, can be listed from the Shell with zfs list -t snapshot or by going to Storage ‣ Snapshots.

Error messages here can indicate any remaining problems.

8.4. Resilver Priority

Resilvering, or the process of copying data to a replacement disk, is best completed as quickly as possible. Increasing the priority of resilvers can help them to complete more quickly. The Resilver Priority tab makes it possible to increase the priority of resilvering at times where the additional I/O or CPU usage will not affect normal usage. Select Storage ‣ Resilver Priority to display the screen shown in Figure 8.4.1. Table 8.4.1 describes the fields on this screen.

_images/storage-resilver-priority.png

Fig. 8.4.1 Resilver Priority

Table 8.4.1 Resilver Priority Options
Setting Value Description
Enabled checkbox Set to enable higher-priority resilvering.
Begin higher priority resilvering at this time drop-down Start time to begin higher-priority resilvering.
End higher priority resilvering at this time drop-down Time to end higher-priority resilvering.
Weekday checkboxes Use higher-priority resilvering on these days of the week.

8.5. Scrubs

A scrub is the process of ZFS scanning through the data on a volume. Scrubs help to identify data integrity problems, detect silent data corruptions caused by transient hardware issues, and provide early alerts of impending disk failures. FreeNAS® makes it easy to schedule periodic automatic scrubs.

Each volume should be scrubbed at least once a month. Bit errors in critical data can be detected by ZFS, but only when that data is read. Scheduled scrubs can find bit errors in rarely-read data. The amount of time needed for a scrub is proportional to the quantity of data on the volume. Typical scrubs take several hours or longer.

The scrub process is I/O intensive and can negatively impact performance. Schedule scrubs for evenings or weekends to minimize impact to users. Make certain that scrubs and other disk-intensive activity like S.M.A.R.T. Tests are scheduled to run on different days to avoid disk contention and extreme performance impacts.

Scrubs only check used disk space. To check unused disk space, schedule S.M.A.R.T. Tests of Type Long Self-Test to run once or twice a month.

Scrubs are scheduled and managed with Storage ‣ Scrubs.
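
A scrub can also be started immediately from a Shell, independent of the schedule. A sketch assuming a pool named volume1:

zpool scrub volume1
zpool status volume1

The status output shows scrub progress; a running scrub can be cancelled with zpool scrub -s volume1.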

When a volume is created, a ZFS scrub is automatically scheduled. An entry with the same volume name is added to Storage ‣ Scrubs. A summary of this entry can be viewed with Storage ‣ Scrubs ‣ View Scrubs. Figure 8.5.1 displays the default settings for the volume named volume1. In this example, the entry has been highlighted and the Edit button clicked to display the Edit screen. Table 8.5.1 summarizes the options in this screen.

_images/storage-scrub.png

Fig. 8.5.1 Viewing Volume Default Scrub Settings

Table 8.5.1 ZFS Scrub Options
Setting Value Description
Volume drop-down menu Choose a volume to be scrubbed.
Threshold days integer Define the number of days to prevent a scrub from running after the last has completed. This ignores any other calendar schedule. The default is a multiple of 7 to ensure that the scrub always occurs on the same day of the week.
Description string Optional text description of scrub.
Minute slider or minute selections If the slider is used, a scrub occurs every N minutes. If specific minutes are chosen, a scrub runs only at the selected minute values.
Hour slider or hour selections If the slider is used, a scrub occurs every N hours. If specific hours are chosen, a scrub runs only at the selected hour values.
Day of Month slider or month selections If the slider is used, a scrub occurs every N days. If specific days of the month are chosen, a scrub runs only on the selected days of the selected months.
Month checkboxes Define the months to run the scrub.
Day of week checkboxes A scrub occurs on the selected days. The default is Sunday, to minimize the impact on users. Note that this field and the Day of Month field are ORed together: setting Day of Month to 01,15 and Day of week to Thursday causes scrubs to run on the 1st and 15th of the month and also on every Thursday.
Enabled checkbox Unset to disable the scheduled scrub without deleting it.

Review the default selections and, if necessary, modify them to meet the needs of the environment. Note that the Threshold field is used to prevent scrubs from running too often, and overrides the schedule chosen in the other fields. Also, if a pool is locked or unmounted when a scrub is scheduled to occur, it will not be scrubbed.

Scheduled scrubs can be deleted with the Delete button, but this is not recommended. Scrubs can provide an early indication of disk issues before a disk failure. If a scrub is too intensive for the hardware, consider temporarily clearing the Enabled option for the scrub until the hardware can be upgraded.

8.6. Snapshots

Snapshots are scheduled using Storage ‣ Periodic Snapshot Tasks. To view and manage the listing of created snapshots, use Storage ‣ Snapshots. An example listing is shown in Figure 8.6.1.

Note

If snapshots do not appear, check that the current time configured in Periodic Snapshot Tasks does not conflict with the Begin, End, and Interval settings. If the snapshot was attempted but failed, an entry is added to /var/log/messages. This log file can be viewed in Shell.

_images/storage-snapshots1.png

Fig. 8.6.1 Viewing Available Snapshots

The listing includes the name of the volume or dataset, the name of each snapshot, and the amount of used and referenced data.

Used is the amount of space consumed by this dataset and all of its descendants. This value is checked against the dataset quota and reservation. The space used does not include the dataset reservation, but does take into account the reservations of any descendent datasets. The amount of space that a dataset consumes from its parent, as well as the amount of space that is freed if this dataset is recursively destroyed, is the greater of its space used and its reservation. When a snapshot is created, the space is initially shared between the snapshot and the filesystem, and possibly with previous snapshots. As the filesystem changes, space that was previously shared becomes unique to the snapshot, and is counted in the used space of the snapshot. Additionally, deleting snapshots can increase the amount of space unique to (and used by) other snapshots. The amount of space used, available, or referenced does not take into account pending changes. While pending changes are generally accounted for within a few seconds, disk changes do not necessarily guarantee that the space usage information is updated immediately.

Tip

Space used by individual snapshots can be seen by running zfs list -t snapshot from Shell.

Refer indicates the amount of data accessible by this dataset, which may or may not be shared with other datasets in the pool. When a snapshot or clone is created, it initially references the same amount of space as the filesystem or snapshot it was created from, since its contents are identical.
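
Both values can also be inspected from a Shell. A sketch using a hypothetical dataset name:

zfs list -r -t snapshot -o name,used,referenced volume1/dataset1

This lists every snapshot beneath volume1/dataset1 along with its used and referenced space.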

Snapshots have icons on the right side for several actions.

Clone Snapshot prompts for the name of the clone to create. A clone is a writable copy of the snapshot. Since a clone is actually a dataset which can be mounted, it appears in the Volumes tab rather than the Snapshots tab. By default, -clone is added to the name of a snapshot when a clone is created.
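
For reference, the equivalent command-line operation is zfs clone. A sketch using a hypothetical snapshot name:

zfs clone volume1/dataset1@auto-20180101.0900-2w volume1/dataset1-clone

The clone volume1/dataset1-clone is a writable dataset that initially shares all of its blocks with the snapshot.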

Destroy Snapshot shows a pop-up message asking for confirmation. Child clones must be destroyed before their parent snapshot can be destroyed. While creating a snapshot is instantaneous, deleting one can be I/O intensive and can take a long time, especially when deduplication is enabled. To delete a block in a snapshot, ZFS has to walk all the allocated blocks to see if that block is used anywhere else; if it is not, the block can be freed.

The most recent snapshot also has a Rollback Snapshot icon. Clicking the icon asks for confirmation before rolling back to this snapshot state. Confirming by clicking Yes causes any files that have changed since the snapshot was taken to be reverted back to their state at the time of the snapshot.

Note

Rollback is a potentially dangerous operation and causes any configured replication tasks to fail as the replication system uses the existing snapshot when doing an incremental backup. To restore the data within a snapshot, the recommended steps are:

  1. Clone the desired snapshot.
  2. Share the clone with the share type or service running on the FreeNAS® system.
  3. After users have recovered the needed data, destroy the clone in the Active Volumes tab.

This approach does not destroy any on-disk data and has no impact on replication.

A range of snapshots can be selected with the mouse. Click on the option in the left column of the first snapshot, then press and hold Shift and click on the option for the end snapshot. This can be used to select a range of obsolete snapshots to be deleted with the Destroy icon at the bottom. Be cautious and careful when deleting ranges of snapshots.

Periodic snapshots can be configured to appear as shadow copies in newer versions of Windows Explorer, as described in Configuring Shadow Copies. Users can access the files in the shadow copy using Explorer without requiring any interaction with the FreeNAS® graphical administrative interface.

The ZFS Snapshots screen allows the creation of filters to view snapshots by selected criteria. To create a filter, click the Define filter icon (near the text No filter applied). When creating a filter:

  • Select the column or leave the default of Any Column.
  • Select the condition. Possible conditions are: contains (default), is, starts with, ends with, does not contain, is not, does not start with, does not end with, and is empty.
  • Enter a value that meets your view criteria.
  • Click the Filter button to save the filter and exit the define filter screen. Alternately, click the + button to add another filter.

When creating multiple filters, select the filter to use before leaving the define filter screen. After a filter is selected, the No filter applied text changes to Clear filter. Clicking Clear filter produces a pop-up message indicating that the filter will be removed and all available snapshots listed.

Warning

A snapshot and any files it contains will not be accessible or searchable if the mount path of the snapshot is longer than 88 ASCII characters. The data within the snapshot remains safe, and the snapshot becomes accessible again when the mount path is shortened. For details of this limitation, and how to shorten a long mount path, see Path and Name Lengths.

8.6.1. Browsing a Snapshot Collection

All snapshots of a dataset are accessible as an ordinary hierarchical filesystem, reachable through a hidden .zfs directory at the root of every dataset. A user with permission to access that directory can view and explore all snapshots of a dataset like any other files, from the CLI or via File Sharing services such as Samba, NFS, and FTP. This is an advanced capability which requires some command line actions. In summary, the main settings changes required are:

  • Snapshot visibility must be manually enabled in the ZFS properties of the dataset.
  • In the Samba auxiliary settings of the share, the veto files parameter must be modified so it no longer hides the .zfs directory, and the setting zfsacl:expose_snapdir=true must be added. Both changes are sketched after this list.
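
A minimal sketch of both changes, assuming a dataset named volume1/dataset1 shared over SMB; the exact contents of the existing veto files list may differ on a given system:

zfs set snapdir=visible volume1/dataset1

and, in the Auxiliary Parameters of the SMB share:

veto files = /.windows/.mac/
zfsacl:expose_snapdir = true

The zfs set command makes the .zfs directory visible at the root of the dataset, and the edited veto files line no longer includes .zfs, so SMB clients can browse it.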

The effect is that any user who can access the dataset contents will also be able to view the list of snapshots by navigating to the .zfs directory of the dataset, and to browse and search any files they have permission to access throughout the entire snapshot collection of the dataset. A user’s ability to view files within a snapshot is limited by any permissions or ACLs set on the files when the snapshot was taken. Snapshots are read-only, so this access does not permit the user to change any files in the snapshots, or to modify or delete any snapshot, even if they had write permission at the time the snapshot was taken.

Note

ZFS has a zfs diff command which can list the files that have changed between any two snapshot versions within a dataset, or between any snapshot and the current data.
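
A sketch of both forms of the command, with hypothetical snapshot names:

zfs diff volume1/dataset1@snap1 volume1/dataset1@snap2
zfs diff volume1/dataset1@snap1 volume1/dataset1

The first compares two snapshots; the second compares a snapshot with the current contents of the dataset. The command must be run as root or by a user with the diff permission delegated on the dataset.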

8.7. VMware-Snapshot

Storage ‣ VMware-Snapshot is used to coordinate ZFS snapshots when using FreeNAS® as a VMware datastore. Once this type of snapshot is created, FreeNAS® will automatically snapshot any running VMware virtual machines before taking a scheduled or manual ZFS snapshot of the dataset or zvol backing that VMware datastore. The temporary VMware snapshots are then deleted on the VMware side but still exist in the ZFS snapshot and can be used as stable resurrection points in that snapshot. These coordinated snapshots will be listed in Snapshots.

Figure 8.7.1 shows the menu for adding a VMware snapshot and Table 8.7.1 summarizes the available options.

_images/vmware1a.png

Fig. 8.7.1 Adding a VMware Snapshot

Table 8.7.1 VMware Snapshot Options
Setting Value Description
Hostname string Enter the IP address or hostname of the VMware host. When clustering, this is the vCenter server for the cluster.
Username string Enter the username on the VMware host with permission to snapshot virtual machines.
Password string Enter the password associated with Username.
ZFS Filesystem drop-down menu Select the filesystem to snapshot.
Datastore drop-down menu After entering the Hostname, Username, and Password, click Fetch Datastores to populate the menu and select the datastore with which to synchronize.