While this approach still scans "holes" in the bus range, it stops
scanning at the maximum subordinate bus number reachable from the base
PCI bus at the host bridge. As a result, startup under Qemu no longer
spends about 12 seconds scanning all 256 buses.
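As a rough sketch (with hypothetical helper names, not the actual driver
interface), the scan keeps track of the highest subordinate bus number
seen so far and uses it as the upper bound:

  /* sketch, hypothetical helpers */
  unsigned max_bus = base_bus;

  for (unsigned bus = base_bus; bus <= max_bus; bus++) {
      /* still probes "holes", but stops at max_bus instead of 255 */
      for_each_device_on(bus, [&] (Device const &dev) {
          /* PCI-to-PCI bridges report their subordinate bus number
             at config-space offset 0x1a */
          if (dev.is_bridge())
              max_bus = max(max_bus, dev.subordinate_bus());
      });
  }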
This commit introduces support for the HMB feature and sets up the
buffer during start-up. The host-memory-buffer (HMB) feature is mostly
used by NVMe devices that do not employ a DRAM cache to store their
translation tables and other operational data. Not using the HMB can
impair the performance of such devices.
The memory is allocated in 2 MiB chunks of DMA-capable memory and its
total size in bytes is configurable via the 'hmb_size' config attribute.
The driver always checks the minimal and preferred size of the HMB and
issues a warning in case it is not enabled via the configuration.
Moreover, if the configured size is less than the minimal amount
required by the device, the HMB is not configured at all and a warning
is issued as well. If the configured size exceeds the preferred size,
it is capped to that amount.
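Roughly, the size handling looks as follows (a sketch with placeholder
identifiers; the minimal and preferred sizes correspond to the
HMMIN/HMPRE values reported by the controller):

  /* sketch, hypothetical identifiers */
  size_t const min   = ctrl.hmb_min_size();    /* from HMMIN */
  size_t const pref  = ctrl.hmb_pref_size();   /* from HMPRE */
  size_t       bytes = config.attribute_value("hmb_size", Number_of_bytes(0));

  if (bytes == 0) {
      warning("HMB supported but not configured, preferred size: ", pref);
  } else if (bytes < min) {
      warning("configured HMB size below required minimum, HMB not used");
      bytes = 0;
  } else if (bytes > pref) {
      bytes = pref;   /* cap at the preferred size */
  }

  /* allocate 'bytes' as 2 MiB chunks of DMA memory and hand the
     descriptor list to the controller */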
Fixes #4715.
As a result of the API change, the memory handling could be simplified.
Since the Block session dataspace is now directly used for DMA, we
actually only have to provide the memory for setting up PRP lists for
large requests (for the moment, requests larger than 8 KiB).
As we limit the maximum data transfer length to 2 MiB, we get by with
just one page per request. This memory is allocated beforehand for the
maximum number of I/O requests, which has been bumped to 512 entries.
Since not all NVMe controllers, especially older ones, support such a
large maximum data transfer length and this many entries, the values
are capped according to the properties of the controller during
initialization. (The memory demands of the component are around 3 MiB
due to setting up for the common case, even if a particular controller
can only make use of less.)
(Although there are controllers whose maximum memory page size is more
than 4K, the driver is hardcoded to solely use 4K pages.)
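In numbers, the pre-allocation amounts to the following (a sketch with
made-up identifiers; the controller limits come from the MDTS and MQES
fields):

  /* sketch, hypothetical identifiers */
  enum {
      MPS            = 4096u,      /* only supported memory page size    */
      MAX_TRANSFER   = 2u << 20,   /* 2 MiB transfer limit of the driver */
      MAX_IO_ENTRIES = 512u,       /* upper bound on I/O queue entries   */
  };

  size_t   const transfer = min((size_t)MAX_TRANSFER,    ctrl.mdts_bytes());
  unsigned const entries  = min((unsigned)MAX_IO_ENTRIES, ctrl.mqes() + 1);

  /* one 4K page holds the PRP list of a full 2 MiB request:
     2 MiB / 4 KiB = 512 PRP entries * 8 bytes = 4 KiB */
  size_t const prp_memory = entries * MPS;   /* up to 512 * 4 KiB = 2 MiB */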
In addition to those changes, the driver now supports the 'SYNC' and
'TRIM' operations of the Block session by using the NVMe 'FLUSH' and
'WRITE_ZEROS' commands.
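Conceptually, the dispatch maps Block operations to NVMe opcodes as
follows (a sketch; the opcode names follow the wording above, not
necessarily the sources):

  /* sketch: map Block-session operations to NVMe opcodes */
  switch (op.type) {
  case Block::Operation::Type::READ:  opcode = Opcode::READ;        break;
  case Block::Operation::Type::WRITE: opcode = Opcode::WRITE;       break;
  case Block::Operation::Type::SYNC:  opcode = Opcode::FLUSH;       break;
  case Block::Operation::Type::TRIM:  opcode = Opcode::WRITE_ZEROS; break;
  default: break;
  }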
Fixes #3702.
Since the timer and timeout handling is part of the base library (the
dynamic linker), it belongs to the base repository.
Besides moving the timer and its related infrastructure (alarm, timeout
libs, tests) to the base repository, this patch also moves the timer
from the 'drivers' subdirectory directly to 'src' and disambiguates the
timer's build locations for the various kernels. Otherwise, the different
timer implementations could interfere with each other when using one
build directory with multiple kernels.
Note that this patch changes the include paths for the former os/timer,
os/alarm.h, os/duration.h, and os/timed_semaphore.h to base/.
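For example, components that formerly pulled in the timeout utilities
via os/ now use base/:

  /* before */
  #include <os/duration.h>
  #include <os/timed_semaphore.h>

  /* after */
  #include <base/duration.h>
  #include <base/timed_semaphore.h>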
Issue #3101
Additionally, remove 'requires_installation_of' and let 'have_installed'
also check sbin directories. The run scripts have been adjusted
accordingly.
Fixes #2853
This driver component provides support for using consumer NVMe storage
devices on Genode, i.e., it omits namespace management and always uses
the first namespace. For now it defaults to a reasonably low
configuration (sketched below):
- 1 I/O queue (completion/submission tuple)
- 128 entries in the I/O queue
- 4096 as the only I/O transaction memory page size
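Expressed as driver-internal defaults, this corresponds roughly to
(hypothetical constant names):

  /* sketch, hypothetical constant names */
  enum {
      IO_QUEUES  = 1,     /* completion/submission queue pairs       */
      IO_ENTRIES = 128,   /* entries per I/O queue                   */
      MPS        = 4096,  /* only supported I/O memory page size (B) */
  };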
Fixes #2747.