bsodmike
Hi all,
I'm running FreeNAS-11.3-RELEASE on my primary node and was planning to upgrade it later this week.
However, a drive failed, which is fairly normal; this server has had drives replaced without issue for over a year now.
So today I followed the usual steps: inserted a new disk, visited the Pool UI, and clicked "Replace" on the failed disk. But for the very first time I'm running into this error:
"UNIQUE constraint failed: storage_encrypteddisk.encrypted_provider"
No further details in the system logs.
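From the error text it looks like the middleware tried to insert a row into its config database that already exists, presumably left over from one of the earlier replace attempts. Before rebooting, I was planning to take a read-only peek at the table named in the error with sqlite3, something like this (assuming the config DB still lives at /data/freenas-v1.db on 11.3):
Code:
# Read-only look at the table named in the error message
# (the /data/freenas-v1.db path is my assumption for the 11.3 config DB)
sqlite3 /data/freenas-v1.db \
  "SELECT id, encrypted_provider FROM storage_encrypteddisk;"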
My pool is a RAIDZ2 and one disk has already failed. I guess the next step is to reboot and try replacing the failed disk again, but I didn't want to reboot until I checked here for advice first. I also don't want a reboot to trigger "other" issues before this is addressed. I'm one drive failure away from losing my data :o
Please note, `b7437013-0187-11ea-9ba8-b4969130e724.eli` is the drive that actually failed. The repeated entries under `replacing-6` are the replacement drive's failed attempts from the error above; I have to go back and "Offline" each of those before I can try replacing `b7437013-0187-11ea-9ba8-b4969130e724.eli` again.
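For reference, here's how I was thinking of cleaning up those stuck `replacing-6` members from the shell, by GUID (GUIDs taken from the zpool status below). I believe `zpool detach`, rather than offline, is what actually removes a member from a replacing vdev, but I haven't run these yet and would rather hear advice first:
Code:
# Untested cleanup sketch: detach each stale replacement attempt from
# replacing-6 by its GUID (GUIDs from the zpool status output below)
zpool detach big-primary 17088220239846819691
zpool detach big-primary 2974305596880713330
zpool detach big-primary 18340350378657856049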
Code:
root@freenas-primary:~ # zpool status -v
  pool: big-primary
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Wed Oct 14 19:29:59 2020
        2.93T scanned at 545M/s, 863G issued at 1.30G/s, 37.1T total
        365M resilvered, 2.27% done, 0 days 07:55:52 to go
config:

        NAME                                                STATE     READ WRITE CKSUM
        big-primary                                         DEGRADED     0     0     0
          raidz2-0                                          DEGRADED     0     0     0
            gptid/c25bf5f0-0309-11eb-ae8e-b4969130e724.eli  ONLINE       0     0     0
            gptid/b4cde05e-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0
            gptid/b54a0478-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0
            gptid/b5bf4293-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0
            gptid/b63637ad-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0
            gptid/b6bde417-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0
            replacing-6                                     UNAVAIL      0     0     0
              12415116992178932106                          UNAVAIL    264   421     0  was /dev/gptid/b7437013-0187-11ea-9ba8-b4969130e724.eli
              17088220239846819691                          OFFLINE      0     0     0  was /dev/gptid/ad373df9-0e23-11eb-ae8e-b4969130e724.eli
              2974305596880713330                           OFFLINE      0     0     0  was /dev/gptid/71d5c022-0e24-11eb-ae8e-b4969130e724.eli
              18340350378657856049                          OFFLINE      0     0     0  was /dev/gptid/832a747f-0e25-11eb-ae8e-b4969130e724.eli
            gptid/b8b2f2b1-0187-11ea-9ba8-b4969130e724.eli  ONLINE       0     0     0

errors: No known data errors

  pool: freenas-boot
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:05:19 with 0 errors on Mon Oct 12 03:50:19 2020
config:

        NAME          STATE     READ WRITE CKSUM
        freenas-boot  ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            da8p2     ONLINE       0     0     0
            da9p2     ONLINE       0     0     0

errors: No known data errors
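If the UI keeps failing even after the stale entries are dealt with, I assume the shell fallback would be a plain `zpool replace` against the failed member's GUID, roughly as below. I'm wary of doing this by hand on an encrypted pool, though, since the UI normally handles the geli setup of the new partition:
Code:
# Hypothetical fallback (untested): replace the failed member by GUID with
# the new disk's geli provider. <new-gptid> is a placeholder; on an
# encrypted pool the new partition must be geli-attached first.
zpool replace big-primary 12415116992178932106 gptid/<new-gptid>.eli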
Appreciate any and all help,
Thanks!!