In a certain circumstance, the re-keying state machine in the VBD module
used the block data of the wrong block for the hash update of an inner node.
On re-keying, the VBD iterates over all snapshots for a given VBA, beginning
with the newest, and re-keys the VBA in each of them. For each snapshot it
loads the branch of the VBA top-down and then updates the branch bottom-up.
However, if loading a certain level of the branch runs into the same
physical block as in the previously processed snapshot at this level, the
algorithm turns around and updates the branch from this point upwards
instead of going down all the way to the leaf. This is because everything
below this point has already been re-keyed in the course of a newer
snapshot.
The bug was located in the case where this turnaround does not happen right
above the leaf, i.e., the first shared physical block is a metadata block.
In this situation, we have to re-encode the highest shared metadata block
into a buffer again before starting to update. The update code acted as if
the mentioned block had just been written back (which is only true when
going down all the way to the leaf before updating) and consequently was
present in the encoded buffer.
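The gist of the fixed turnaround handling, as a sketch with hypothetical
helper names (load_level, shares_pa_with_previous_snapshot,
encode_into_buffer, and update_level are illustrative only, not the actual
Tresor/VBD interfaces):

  void rekey_vba_in_snapshot(Snapshot &snap, Virtual_block_address vba)
  {
      unsigned level = max_level(snap);   /* root of the branch, leaf is level 0 */

      /* load the branch of the VBA top-down */
      for (;;) {
          load_level(snap, vba, level);

          bool const shared =
              shares_pa_with_previous_snapshot(snap, vba, level);

          /*
           * Fix: a shared metadata block was not just written back on this
           * pass, so re-encode it into the buffer before the bottom-up
           * update starts.
           */
          if (shared && level > 0)
              encode_into_buffer(snap, vba, level);

          /* turn around at the first shared block or at the leaf */
          if (shared || level == 0)
              break;

          level--;
      }

      /* update the branch bottom-up from the turnaround point */
      for (unsigned l = level; l <= max_level(snap); l++)
          update_level(snap, vba, l);
  }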
Ref #4971
Until now, it was possible to use bad Free-Tree/VBD configurations with the
<initialize/> command. The tresor tester didn't complain about it, but the
tresor lib crashed or, worse, corrupted the tresor container. Now, the
tresor tester checks, for instance, that "nr_of_children" is a power of 2.
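Such a check can be sketched as follows (illustrative only, not the actual
tester code):

  /* sketch: reject a "nr_of_children" value that is not a power of two */
  static bool is_power_of_two(unsigned long const v)
  {
      return v != 0 && (v & (v - 1)) == 0;
  }

  if (!is_power_of_two(nr_of_children))
      error("nr_of_children must be a power of 2");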
Ref #4971
The Superblock Control module now issues a snapshot garbage collection on each
incoming request. In return for that, the commit removes all calls to the
garbage collection from other modules.
Ref #4971
The Virtual Block Device module used to create a local copy of the
Snapshots array or Snapshot root, respectively, that it received with an
incoming request. After finishing the VBD operation on the copy, the source
module of the request copied the resulting Snapshots array or Snapshot root
back. This is not only less efficient than passing a reference but also
allowed a bug to sneak into the new C++ implementation.
In contrast to the old Ada/SPARK implementation (CBE), the new design doesn't
allow for global objects that can be accessed by any module without receiving a
reference in a module request. Therefore, the Free Tree module has to receive a
reference to a Snapshots array with each request in order to be able to use it.
In our case, these requests are allocations for a "Write" operation from the
VBD. However, the VBD itself receives only the one Snapshot required for
writing and therefore causes the Free Tree to make bad decisions on
whether a block can be re-allocated or not.
With this commit, the VBD always receives a reference to the whole
Snapshots array and also propagates it this way to the Free Tree.
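The idea can be sketched as follows (hypothetical type names, not the
actual Tresor module interfaces):

  /* sketch: a Free Tree request carries a reference, not a copy */
  struct Allocation_request
  {
      Snapshots const &snapshots;   /* whole array, shared by reference */
      Virtual_block_address vba;
      /* ... */
  };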
Ref #4971
This function gets called by some libssh applications using vfs_lxip.
For the dummy implementation, I looked at the old port.
Issue genodelabs#5161
Issue gapfruit#1976
- always assign apps/overlay to targets (visible=true/false) to
prevent 0x0 geometry, which is interpreted as close
- add QMenu as example to panel button
- use usb-tablet on Qemu
By default, windows assigned to targets are visible, which can be
changed with the new boolean "visible" attribute. Thus, windows can be
hidden without changing their geometry.
Before, the current back-most window was not restacked if it was part of
the stack already, which led to a partially inconsistent view of the window
stack between decorator and nitpicker.
The added hook 'OBJ_POSTPROC_SRC' gives us a way to post-process object
files for generating supplemental code. By using this hook, the
initcall_table.c generated by import-lx_emul_common.inc gets reliably
executed after all object files are built.
Fixes #5159
The option is used during the generation of initcall_table.c.
However, it happens to strip the first argument following the option.
The long option --defined-only works as expected.
Issue #5155
Due to a bug in the original implementation, the size of the MMIO
range covering the 'Request_sense_response' data was set too large
during the MMIO boundary change. This rendered devices that were not
yet ready and required a 'Request_sense' command unusable.
The commit also adapts all other commands where the MMIO size does
not match the expected one.
Fixes #5133.
The commit adds support for throttling the rate of the RX IRQs to a
specified value. The effect is that no RX IRQ fires before the time
threshold has elapsed, which reduces the CPU load on the host. Tuning the
value is a trade-off between CPU load, throughput, and overload behavior.
Modular Sculpt 23.10 on S938 served as test case. The CPU affinity is
denoted in brackets.
ipxe (0,0) -> nic_router (1,0) -> Debian VM vbox6 (3,0) and (3,1)
VM: iperf -c X.X.X.X -t 60 -R
The iperf server X.X.X.X runs outside Sculpt and, due to '-R', sends data
to the VM.
Non-representative measurement points:
  config     | ipxe CPU | nic_router CPU | iperf throughput | throttle value
  -----------+----------+----------------+------------------+---------------
  w/o patch  | ~80%     | ~50%           | ~706 MBit/s      | 0 (throttling off by default on S938)
  patch 651  | ~20%     | ~35%           | ~763 MBit/s      | 651 (0.166 ms RX IRQ throttle)
  patch 5580 | ~15%     | ~25%           | ~650 MBit/s      | 5580 (1.4 ms RX IRQ throttle)
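The throttle values above are consistent with an interval counted in
256 ns units (the granularity of Intel's interrupt-throttle registers);
that this is exactly the unit used by the driver option is an assumption
here:

  /* assumption: one throttle unit corresponds to 256 ns */
  unsigned long throttle_interval_ns(unsigned long const value)
  {
      /* 651 -> ~166'656 ns (~0.166 ms), 5580 -> ~1'428'480 ns (~1.4 ms) */
      return value * 256;
  }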
Issue #5149
A bunch of transmit requests received from the Uplink server (nic_router)
are currently added to the ring buffer one by one, and each time the
hardware is notified to process that single request.
Instead, add as many transmit requests as possible to the ring buffer of
the hardware and, when done, trigger the hardware once to process the ring.
Additionally, don't receive a "processed" TX IRQ for each element in the
ring, which causes high CPU load.
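The batching pattern looks roughly like this (hypothetical helper names,
not the actual ipxe driver code):

  unsigned queued = 0;

  /* fill the TX ring with as many pending packets as it can take */
  while (tx_pending() && tx_ring_has_space()) {
      enqueue_tx_descriptor(next_pending_tx_packet());  /* no doorbell yet */
      queued++;
  }

  /* notify the hardware only once for the whole batch */
  if (queued)
      write_tx_tail_pointer();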
With this commit, the number of TX IRQs in the ipxe driver for an
iperf -c X.X.X.X -t 60
from within a VM to the outside iperf server is reduced from about
2'600'000 to about 200'000. The overall CPU load of the driver (when
executed alone on CPU 0) is reduced from ~85 percent to ~45 percent.
Issue #5149
During receive, the nic_ep may block as long as the guest does not provide
another receive network descriptor. In the meantime, all Genode signals
regarding the network interface, e.g. TX, are postponed, which may affect
the throughput.
Instead, use the nic_ep only for unblocking on RX packets. Add a
notification mechanism to the e1000 vbox network model that notifies us as
soon as the guest has added new receive descriptors to the model.
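A minimal sketch of such a hook (hypothetical interface, not the actual
vbox network-model code):

  /* invoked by the e1000 model when the guest posts new RX descriptors */
  struct Rx_descriptor_notifier
  {
      virtual void rx_descriptors_available() = 0;
  };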
Issue #5146
For pbxa9, Qemu is started with only 256 MiB for foc but with 768 MiB
for base-hw. By reducing the RAM quota for all start nodes within the
remote scenario, each component gets enough RAM quota to breathe.
When wrongly invoking the run script by specifying a skipped test
as its only TEST_PKGS argument, the run script fails due to a wrong
tar argument order. Let's better reflect this condition to the user
ahead of invoking tar.
With `MAP_FIXED` absent from the mmap(3p) flags, "the implementation uses
addr in an implementation-defined manner to arrive at pa", which may
lead to a mapping at an address different from the requested `addr`.
Add `MAP_FIXED` to the mmap flags to force the mapping to the specified
address.
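For illustration, a generic POSIX usage of MAP_FIXED (not the specific
call site of this commit; 'addr', 'length', and 'fd' are placeholders):

  #include <sys/mman.h>

  void *map_at_fixed_address(void *addr, size_t length, int fd)
  {
      void *pa = mmap(addr, length, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_FIXED, fd, 0);

      /* with MAP_FIXED, a successful mapping is placed exactly at 'addr' */
      return (pa == MAP_FAILED) ? nullptr : pa;
  }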
Fixes #5147
Such messages can occur by chance when killing 'echo' while the program
blocks in an IPC call. It gets killed nevertheless. So the message does
not hint at a failure of the test.
In the context of #5138, the timer drivers for NOVA and base-hw had been
changed to support timeouts at a precision of 250 us (from formerly 1 ms).
Adjust the test to the new expected lower bound.
The dynamic buffer allocation increases the RAM demand slightly beyond
1M on seL4. Use 2M, as is already the default in pkg/terminal_crosslink.
Issue #5135
Replace the USB session API with one that provides a devices ROM only,
which contains information about all USB devices available to this client,
as well as methods to acquire and release a single device.
The acquisition of a USB device returns the capability to a device session
that includes a packet stream buffer to communicate control transfers
between the client and the USB host controller driver. Moreover,
additional methods to acquire and release a USB interface can be used.
The acquisition of a USB interface returns the capability to an interface
session that includes a packet stream buffer to communicate either
bulk, interrupt, or isochronous transfers between the client and the
USB host controller driver.
This commit implements the API changes on behalf of the Genode C API's
USB server and client side. Additionally, it provides Usb::Device,
Usb::Interface, and Usb::Endpoint utilities that can be used by native
C++ clients to use the new API and hide the sophisticated packet stream
API.
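A rough sketch of how a native C++ client might use these utilities; the
'Usb::Connection' type and the method names below (with_device,
control_transfer, with_interface, bulk_transfer) are illustrative
assumptions, not the verified API:

  /* illustrative sketch only; names are assumptions, not the verified API */
  Usb::Connection usb { env };   /* provides the devices ROM */

  usb.with_device("usb-1.2", [&] (Usb::Device &device) {

      /* control transfers use the device session's packet stream */
      device.control_transfer(/* request, value, index, payload ... */);

      /* bulk/interrupt/iso transfers use an interface session */
      device.with_interface(0, [&] (Usb::Interface &iface) {
          iface.bulk_transfer(Usb::Endpoint { /* ... */ }, /* payload ... */);
      });
  });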
The adaptations necessary target the following areas:
* lx_emul layer for USB host and client side
* Linux USB host controller driver port for PC
* Linux USB client ports: usb_hid_drv and usb_net_drv, additionally
reduce the Linux tasks used inside these drivers
* Native usb_block_drv
* black_hole component
* Port of libusb, including smartcard and usb_webcam driver depending on it
* Port of Qemu XHCI model library, including vbox5 & vbox6 depending on it
* Adapt all run-scripts and drivers_interactive recipes to work
with the new policy rules of the USB host controller driver
Fixes genodelabs/genode#5021
For now this import file is solely there to satisfy the mechanism
in Goa that collects and incorporates import files for used APIs.
Issue genodelabs/goa#81.
The kernel timer used to truncate timeouts to the next lower
millisecond, which not only limits the wakeup accuracy but also results
in situations where a user-level timeout is triggered earlier than
expected. The latter effect results in the observation of spurious
timeouts and the subsequent programming of another timeout.
The patch solves the problem by preserving the sub-millisecond bits
in the 'us_to_ticks' implementation(s).
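The gist of such a conversion fix can be sketched as follows (illustrative
arithmetic, not the literal kernel code):

  #include <cstdint>

  /* truncating variant: drops the sub-millisecond part of 'us' */
  std::uint64_t us_to_ticks_old(std::uint64_t us, std::uint64_t ticks_per_ms)
  {
      return (us / 1000) * ticks_per_ms;
  }

  /* preserving variant: scale first, so sub-millisecond bits survive */
  std::uint64_t us_to_ticks(std::uint64_t us, std::uint64_t ticks_per_ms)
  {
      return (us * ticks_per_ms) / 1000;
  }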
Issue #5142