c758b77d4a
In nvmet_sq_destroy() we capture sq->ctrl early; if it is non-NULL we know that a ctrl was allocated (in the admin connect request handler) and we need to release pending AERs, clear ctrl->sqs and sq->ctrl (primarily for nvme-loop), and drop the final reference on the ctrl.

However, a small window exists where nvmet_sq_destroy() starts (as a result of the client giving up and disconnecting) concurrently with the nvme admin connect command (which may be in an early stage), but *before* the kill_and_confirm of sq->ref (i.e. the admin connect managed to get a live sq reference). In this case sq->ctrl is allocated, but only after it was captured in the local variable in nvmet_sq_destroy(), which prevented the final reference drop on the ctrl.

Solve this by re-capturing sq->ctrl after all inflight requests have completed, at which point the sq->ctrl reference is certainly final, and proceeding based on that value.

This issue was observed in an environment where many hosts connect multiple ctrls simultaneously, creating a delay in allocating a ctrl that leads to this race window.

Reported-by: Alex Turin <alex@vastdata.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
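To make the ordering concrete, below is a minimal sketch of what nvmet_sq_destroy() looks like with the late re-capture. It assumes the existing nvmet internals (struct nvmet_sq, struct nvmet_ctrl, nvmet_confirm_sq(), nvmet_async_events_failall(), nvmet_ctrl_put()) and illustrates the shape of the change described above rather than reproducing the exact upstream diff.

	/* Sketch only; assumes the usual definitions from "nvmet.h". */
	void nvmet_sq_destroy(struct nvmet_sq *sq)
	{
		struct nvmet_ctrl *ctrl = sq->ctrl;

		/*
		 * If this is the admin queue of an already allocated ctrl,
		 * release any pending AERs before tearing the queue down.
		 */
		if (ctrl && ctrl->sqs && ctrl->sqs[0] == sq)
			nvmet_async_events_failall(ctrl);

		/* Fail new references and wait for inflight requests to drain. */
		percpu_ref_kill_and_confirm(&sq->ref, nvmet_confirm_sq);
		wait_for_completion(&sq->confirm_done);
		wait_for_completion(&sq->free_done);
		percpu_ref_exit(&sq->ref);

		/*
		 * Re-read sq->ctrl: an admin connect may have allocated and
		 * assigned it after the early capture above but before the
		 * percpu_ref was killed. No further assignment is possible at
		 * this point, so this value must receive the final ref drop.
		 */
		ctrl = sq->ctrl;
		if (ctrl) {
			ctrl->sqs[sq->qid] = NULL;
			sq->ctrl = NULL;	/* allows reusing the queue (nvme-loop) */
			nvmet_ctrl_put(ctrl);
		}
	}

The key point is that once the percpu_ref has been killed and all inflight requests have drained, sq->ctrl can no longer change, so the value read at that point is the one that must be put.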