That's not a problem. It is a pool-creation-time issue of "just sit there and let it do its thing". Once you've done it, it is just fine.
No, it isn't fine at all.
I've been testing a server with 10 Intel P3500 NVMe drives, and TRIM has a serious problem there, compounded by overly optimistic TRIM handling in ZFS. I've seen some performance degradation when deleting large volumes of data on SATA SSDs, but with NVMe drives the problem is really serious.
When releasing blocks, ZFS assumes that the underlying driver will coalesce TRIM requests, grouping them into large contiguous ranges. The ada driver for SATA disks certainly does that, but the nvd driver does not.
I observed terrible behavior while running bonnie++ benchmarks: deleting 2 TB of data on a raidz2 pool of the 10 NVMe drives almost froze I/O for 15 minutes.
The issue is made even worse because ZFS has a write-throttle mechanism which, of course, gets triggered by the traffic jam caused by the countless TRIM requests. But even with the write throttle disabled and the maximum number of active TRIM requests tuned down, I/O is almost halted until the TRIM requests complete.
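For reference, the tuning I mention is done through sysctl/loader tunables. The names below are the ones I believe exist in the FreeBSD 10.x legacy ZFS; they may differ between releases, so treat this as an illustration of the knobs involved rather than an exact recipe (check `sysctl -d vfs.zfs.trim` on your system):

```shell
# Illustrative only: tunable names taken from FreeBSD's legacy ZFS
# and may vary by release; verify with `sysctl -d` before relying on them.

# Limit the number of concurrent TRIM requests queued per vdev.
sysctl vfs.zfs.vdev.trim_max_active=1

# Delay (in transaction groups) before freed blocks are actually trimmed,
# giving ZFS a chance to batch them.
sysctl vfs.zfs.trim.txg_delay=32

# Last resort: disable TRIM entirely at boot, in /boot/loader.conf:
#   vfs.zfs.trim.enabled=0
```

Even with settings like these, in my tests I/O stayed nearly stalled until the backlog of TRIM requests drained.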
Apart from the lack of TRIM coalescing in the nvd driver, ZFS should implement a throttling mechanism for TRIM requests as well.
I filed a bug report for FreeBSD, and I'm considering trying to write a fix, but I am not familiar with the code, my experience with kernel coding is limited to helping spot problems, and unfortunately my time is very limited.
This is the bug report:
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=209571