Lines Matching defs:to

122 	 * If admin_entries was overridden to an invalid value, revert it
123 * back to our default value.
154 * specify a smaller limit, so we need to check the MQES field in the
155 * capabilities register. We have to cap the number of entries to the
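The fragment above (source lines 154-155) is about clamping the queue size to the MQES field of the capabilities register. A minimal sketch of that clamp, assuming MQES is zero-based and using accessor names in the style of the FreeBSD nvme headers:

	/*
	 * Sketch: MQES reports the maximum entries minus one, so the
	 * largest queue the controller accepts holds MQES + 1 entries.
	 */
	uint32_t cap_lo = nvme_mmio_read_4(ctrlr, cap_lo);
	uint32_t mqes = NVME_CAP_LO_MQES(cap_lo);

	num_entries = min(num_entries, mqes + 1);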
176 * No need to have more trackers than entries in the submit queue. Note
185 * not a hard limit and will need to be revisited when the upper layers
239 * No need to disable queues before failing them. Failing is a superset
254 * Wait for RDY to change.
300 * Per 3.1.5 in NVME 1.3 spec, transitioning CC.EN from 0 to 1
301 * when CSTS.RDY is 1 or transitioning CC.EN from 1 to 0 when
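Source lines 300-301 quote the rule from section 3.1.5 of the NVMe 1.3 spec: CC.EN must not be toggled until CSTS.RDY already reflects the previous CC.EN value, otherwise the result is undefined. A minimal sketch of the wait this implies, with the function name invented for illustration and the usual driver helpers (nvme_mmio_read_4, NVME_CSTS_GET_RDY, DELAY) assumed to be in scope:

	/*
	 * Sketch: poll CSTS.RDY until it matches the desired value so the
	 * caller never flips CC.EN while RDY still shows the old state.
	 */
	static int
	wait_for_rdy(struct nvme_controller *ctrlr, uint32_t desired)
	{
		int ms = ctrlr->ready_timeout_in_ms;

		while (ms-- > 0) {
			uint32_t csts = nvme_mmio_read_4(ctrlr, csts);

			if (NVME_CSTS_GET_RDY(csts) == desired)
				return (0);
			DELAY(1000);	/* 1 ms between polls */
		}
		return (ETIMEDOUT);
	}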
382 * that value stored in mps is suitable to use here without adjusting by
401 * reset, so do not try to disable them. Use is_initialized
402 * to determine if this is the initial HW reset.
440 * immediately since there is no need to kick off another
463 /* Convert data to host endian */
467 * Use MDTS to ensure our default max_xfer_size doesn't exceed what the
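Source line 467 refers to clamping the default max_xfer_size by the controller-reported MDTS. MDTS is an exponent relative to the minimum page size (2^(12 + CAP.MPSMIN)), with 0 meaning no limit; a hedged sketch of that clamp, field and macro names assumed:

	/*
	 * Sketch: translate MDTS from a power-of-two multiple of the
	 * minimum page size into bytes, then clamp our default.
	 */
	if (cdata->mdts > 0)
		ctrlr->max_xfer_size = min(ctrlr->max_xfer_size,
		    1 << (cdata->mdts + 12 + NVME_CAP_HI_MPSMIN(cap_hi)));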
687 * don't pass log page data to the consumers. In practice, this case
694 /* Convert data to host endian */
760 * Repost another asynchronous event request to replace the one
775 * to replace the one just failed, only to fail again and
797 /* Wait to notify consumers until after log page is fetched. */
803 * Repost another asynchronous event request to replace the one
859 /* aerl is a zero-based value, so we need to add 1 here. */
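Since AERL in the identify data is zero-based, the usable number of outstanding asynchronous event requests is aerl + 1; roughly (the NVME_MAX_ASYNC_EVENTS cap shown here is an assumption):

	ctrlr->num_aers = min(NVME_MAX_ASYNC_EVENTS, cdata->aerl + 1);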
925 /* Limit HMB to 5% of RAM size per device by default. */
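Source line 925 states the default Host Memory Buffer cap of 5% of RAM per device. A one-line sketch of that arithmetic, using the kernel's physmem page count (the local variable name is made up):

	/* Sketch: physmem is in pages, so 5% of RAM in bytes is physmem * PAGE_SIZE / 20. */
	uint64_t max_hmb_bytes = (uint64_t)physmem * PAGE_SIZE / 20;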
967 nvme_printf(ctrlr, "failed to alloc HMB\n");
975 nvme_printf(ctrlr, "failed to load HMB\n");
1007 nvme_printf(ctrlr, "failed to alloc HMB desc\n");
1016 nvme_printf(ctrlr, "failed to load HMB desc\n");
1070 * we have already submitted admin commands to get
1098 * including using NVMe SET_FEATURES/NUMBER_OF_QUEUES to determine the
1100 * after any reset for controllers that depend on the driver to
1112 panic("num_io_queues changed from %u to %u",
1280 /* Assume user space already converted to little-endian */
1364 uint32_t to, vs, pmrcap;
1424 to = NVME_CAP_LO_TO(cap_lo) + 1;
1425 ctrlr->ready_timeout_in_ms = to * 500;
1455 * failed up the stack. The fail_req task needs to be able to run in
1456 * this case, since some request failures can only be finished there.
1459 * queue before proceeding to free the sim, though nothing would stop
1533 * Notify the controller of a shutdown, even though this is due to
1634 int to = hz;
1645 * that may have been started to complete. The reset process we follow
1646 * will ensure that any new I/O will queue and be given to the hardware
1649 while (atomic_cmpset_32(&ctrlr->is_resetting, 0, 1) == 0 && to-- > 0)
1651 if (to <= 0) {
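Source lines 1645-1651 show the suspend/reset path claiming the is_resetting flag before touching the hardware. A sketch of the whole pattern, with the sleep and the failure handling filled in as assumptions rather than quoted from the driver:

	/*
	 * Sketch: take ownership of is_resetting so no new reset task can
	 * start, polling roughly once per tick for up to hz ticks (about
	 * one second).  If a competing reset never finishes, give up.
	 */
	int to = hz;

	while (atomic_cmpset_32(&ctrlr->is_resetting, 0, 1) == 0 && to-- > 0)
		pause("nvmewait", 1);		/* sleep one tick, then retry */
	if (to <= 0) {
		nvme_printf(ctrlr,
		    "competing reset task did not finish, giving up\n");
		return (EWOULDBLOCK);
	}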
1661 * Per Section 7.6.2 of NVMe spec 1.4, to properly suspend, we need to
1665 * incriminating. Once we delete the qpairs, we have to disable them
1680 * Can't touch failed controllers, so nothing to do to resume.
1700 * the controller. However, we have to return success for the resume
1701 * itself, due to questionable APIs.
1703 nvme_printf(ctrlr, "Failed to reset on resume, failing.\n");