The argument is superfluous because only run/image/uboot evaluated it
anyway, and its value is always boot/image.elf. With this change, the
official semantics of run_image become: "replace the boot/image.elf file
with platform-specific file(s) at boot/ that can actually be booted".
Issue #4730
* Update links from forward rules only with forward rules and links from
transport-routing rules only with transport-routing rules. Besides improving
the performance of the code, this also fixes a former bug that allowed
forward-rule links to falsely stay active because of a transport-routing
rule that matched the client's destination IP and port.
* Don't use good-case exceptions for updating TCP/UDP links on re-configuration
of the router.
* Make the conditions for when to dismiss a forward rule easier to read.
* Introduce the != operator to the public Port class in the net library (see
the sketch after this list).
* Fix a spurious log message stating that a link was dismissed when merely a
potentially matching forward rule turned out not to match.
* Apply Genode coding style to if statements with a single body statement.
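As a rough illustration, a minimal sketch of what the added comparison
operator could look like; the 'value' member follows the convention of
Genode's Net::Port, but this is not the library's actual code:

  /* sketch of a Port type gaining an inequality operator */
  #include <base/stdint.h>

  namespace Net_sketch { struct Port; }

  struct Net_sketch::Port
  {
  	Genode::uint16_t value;

  	explicit Port(Genode::uint16_t value) : value(value) { }

  	bool operator == (Port const &other) const { return value == other.value; }

  	/* new operator introduced by this commit (actual declaration may differ) */
  	bool operator != (Port const &other) const { return value != other.value; }
  };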
Fix #4728
This fixes a bug that was introduced by this earlier commit:
"nic_router: find forward rules w/o exceptions"
The NIC router used to falsely dissolve TCP/UDP connection states when
reconfiguring although the connection states were still legal according to the
new config. The reason was that the above-mentioned commit nested lambdas but
failed to return from the last nesting level when having found a configuration
that legitimizes the connection state.
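The pitfall is generic: a 'return' inside an inner lambda only leaves that
lambda, not the enclosing search. A self-contained illustration with
hypothetical names, not the router's actual code:

  #include <vector>

  struct Rule { unsigned port; };

  /* hypothetical helper: apply fn to each rule, like the router's lookup lambdas */
  template <typename FN>
  static void for_each_rule(std::vector<Rule> const &rules, FN const &fn)
  {
  	for (Rule const &rule : rules)
  		fn(rule);
  }

  static bool connection_legal(std::vector<Rule> const &rules, unsigned port)
  {
  	bool legal = false;
  	for_each_rule(rules, [&] (Rule const &rule) {
  		if (rule.port == port)
  			legal = true;   /* a 'return' here would leave only the lambda */
  	});
  	return legal;           /* the outer level must propagate the result */
  }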
Ref #4728
The semantics of .NOTPARALLEL have changed in GNU Make 4.4
Quote:
New feature: .NOTPARALLEL accepts prerequisites If the .NOTPARALLEL
special target has prerequisites then all prerequisites of those targets
will be run serially (as if .WAIT was specified between each
prerequisite).
This means that only the listed prerequisites are made sequential. Before,
everything within a Makefile was processed in sequential order.
Therefore, we had to add the *.hash target (which appears multiple times) to
the .NOTPARALLEL prerequisites.
issue #4725
Tests on qemu would fail when started with RAM sizes from 1025MiB to
2048MiB, because the mapping hole in the page table from 1GiB to
2GiB would interfere with qemu's mapping addresses for ACPI.
Identity-map the complete first 4GiB of memory to catch all early
memory accesses during bootstrap.
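A toy sketch of the idea, with a hypothetical stand-in for the bootstrap
page-table code (not the actual implementation):

  #include <cstdint>
  #include <cstdio>

  enum : uint64_t { GIB = 1024ull * 1024 * 1024 };

  /* hypothetical stand-in: install one 1-GiB identity mapping (virt == phys) */
  static void map_identity_1g(uint64_t phys_base)
  {
  	printf("identity-map [%llx..%llx)\n",
  	       (unsigned long long)phys_base,
  	       (unsigned long long)(phys_base + GIB));
  }

  /* identity-map the complete first 4 GiB so early accesses, e.g. to qemu's
     ACPI tables between 1 GiB and 2 GiB, hit a valid mapping */
  int main()
  {
  	for (uint64_t addr = 0; addr < 4 * GIB; addr += GIB)
  		map_identity_1g(addr);
  }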
Fixes #4724.
This patch simplifies the 'Deploy::update_managed_deploy_config'
interface by keeping an internal copy of the currently used deploy
template inside the 'Deploy' class. The template is updated whenever
the config/deploy file is modified.
This change weakens the coupling between the '_manual_deploy_rom' and
the '_deploy' subsystem, easing the upcoming implementation of switching
between presets.
Adds befriended test-local wrappers for the classes Cpu_share and Cpu_scheduler
and adds a print method to the scheduler wrapper that prints the internal state
of the scheduler to the given output. Cpu_shares are referenced in the output
via the IDs that the test uses to organize them, i.e., this corresponds to how
the CPU shares are named when calling the atomic steps the test is made of.
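A minimal sketch of what such a test-local wrapper with a print method could
look like; the wrapper and member names are hypothetical, only the
'print(Output &)' convention is Genode's:

  #include <base/output.h>

  namespace Scheduler_test { struct Scheduler; }

  /* test-local wrapper, befriended by the scheduler class under test */
  struct Scheduler_test::Scheduler
  {
  	unsigned head_id;     /* ID the test assigned to the current head share */
  	unsigned used_quota;  /* example of internal state exposed for checks   */

  	/* Genode's print convention makes the object usable with log() et al. */
  	void print(Genode::Output &out) const
  	{
  		Genode::print(out, "head=share_", head_id, " quota=", used_quota);
  	}
  };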
Ref #4151
Ref #4710
This adapts the test to the changes that were applied to the scheduling scheme
by the following commits:
* base-hw scheduler: optimize quota depletion events
* base-hw scheduler: fix bug on removing head
* base-hw scheduler: fix ready method
* base-hw: optimize & cleanup scheduler
Part of this is that the test used to check whether the act of setting a share
ready outdates the head or not. However, with the current version of the
scheduler, this check is not possible anymore. We can merely check whether the
head is outdated after setting the share ready. So, among other adaptations,
this commit adapts the expectations of the test to the new semantics of the
check.
Ref #4151
Ref #4710
* Get rid of preprocessor macros.
* Introduce Main as a class (see the sketch after this list).
* Exit with -1 instead of looping endlessly on errors.
* Don't try to deal with error conditions, just print a message and exit
with -1.
* Only one operation per line.
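A minimal sketch of the resulting structure, assuming a regular Genode
component; the member names are placeholders, not the test's actual code:

  #include <base/component.h>
  #include <base/log.h>

  namespace Test { struct Main; }

  struct Test::Main
  {
  	Genode::Env &_env;

  	bool _run_step() { return true; /* placeholder for an atomic test step */ }

  	Main(Genode::Env &env) : _env(env)
  	{
  		/* one operation per line, no macros */
  		bool const step_ok = _run_step();

  		/* on error: print a message and exit with -1 instead of looping */
  		if (!step_ok) {
  			Genode::error("step failed");
  			_env.parent().exit(-1);
  			return;
  		}
  		_env.parent().exit(0);
  	}
  };

  void Component::construct(Genode::Env &env) { static Test::Main main(env); }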
Ref #4151
Ref #4710
This is an optimization for the case that a prioritized scheduling context
needs slightly more time during a round than granted via quota. If this is the
case, we move the scheduling context to the front of the unprioritized schedule
once its quota gets depleted and thereby at least ensure that it does not have
to wait for all unprioritized scheduling contexts as well before being
scheduled again.
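A self-contained sketch of this policy, using std::deque as a stand-in for the
scheduler's own queue type and hypothetical names:

  #include <deque>

  struct Context { unsigned id; unsigned quota_left; };

  /* the round-robin queue of unprioritized/depleted scheduling contexts */
  using Unprio_queue = std::deque<Context>;

  /* hypothetical handler for a prioritized context depleting its quota */
  void on_quota_depleted(Unprio_queue &unprio, Context depleted)
  {
  	/* old behaviour: append at the back, i.e., wait for the whole queue */
  	/* unprio.push_back(depleted); */

  	/* new behaviour: move to the front of the unprioritized schedule so the
  	   context is considered next once all prioritized contexts are done */
  	unprio.push_front(depleted);
  }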
Note that this introduces the possibility of undeserved starvation of
unprioritized scheduling contexts to the scheduling scheme. If there are
enough prioritized contexts that deplete their quota during a round,
they may also cover the rest of the round with their unprioritized time
slices. If this happens every round, contexts without a priority/quota may
never get a turn. In the previous scheduling scheme, this could not occur as
the unprioritized schedule was completely independent of prioritized
schedules and rounds.
Ref #4151
Ref #4710
The scheduler did not consider the consumed quota during a call to "update"
if the head that consumed the quota was removed from the scheduler. When this
occurred, the internal round time did not advance as expected but remained at
its previous value until the next call to "update" (without a removed head).
This commit introduces a new flag that is set only when the head gets removed
in order to detect and handle the situation correctly on the next call to
"update".
Ref #4151
Ref #4710
Setting the _need_to_schedule member in the 'ready' method of the scheduler
was not done correctly. In particular, _need_to_schedule was set to true in
situations where the head was not outdated by the 'ready' operation.
Ref #4151
* Remove *request* in the context of wait, reply, and send to shorten the names.
* Use ready_to_* instead of can_*, which is regularly used in Genode's APIs
* Replace helping_sink with helping_destination, as destination is more common
Ref genodelabs/genode#4704
The IPC protocol violations are:
* Sending to an unknown thread (cap)
* Waiting for messages if a reply hasn't happened yet
This silences threads that otherwise repeatedly cause kernel messages
about the violation.
Ref genodelabs/genode#4704
* Split the internal state into incoming and outgoing message relations
* Avoid fragmenting the state across several members like the former '_state' and '_help'
* Remove the pointer to the caller, use an incoming FIFO instead (see the sketch below)
This commit fixes at least two bugs that were triggered by tests that
destroy threads in many different states, like run/bomb:
* The '_help' data member was not reset reliably in each situation where a
helping relationship came to an end. However, when we fixed this bug alone
in the old state model, the issues remained. The new state model fixes
this bug as well.
* A thread sometimes referenced an already dead thread as receiver. This caused
the kernel IPC code to access the vtable of an object that didn't exist any
longer. Note that the two threads were not in a direct IPC relationship while
the receiver was destroyed, so there must have been an intermediate node
between them. Due to the complexity of this problem, we eventually gave up
pin-pointing the exact reason in the kernel IPC code. The issue disappeared
with the new state model.
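A self-contained sketch of the split into incoming and outgoing relations,
with std::queue standing in for the kernel's FIFO and all names hypothetical:

  #include <queue>

  /* hypothetical thread with its IPC state split into two relations */
  struct Thread_sketch
  {
  	/* outgoing message relation: the one receiver this thread talks to,
  	   plus whether it helps that receiver with its own CPU time */
  	struct Outgoing
  	{
  		Thread_sketch *receiver { nullptr };
  		bool           helping  { false };
  	} outgoing { };

  	/* incoming message relation: callers line up in a FIFO instead of the
  	   receiver holding a single caller pointer */
  	std::queue<Thread_sketch *> callers { };
  };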
Fix genodelabs/genode#4704
When writing the GPT header, the tool always wrote the GPT entries
belonging to the primary header to the LBA following the header. Normally,
this is LBA 2, as the header is located in LBA 1. The GPT allows for
up to 128 entries that all in all cover 16 KiB of storage space.
However, on some systems, e.g. ARM-based machines, the bootloader can
be stored in this region. For this reason the GPT entries may be moved
to a different LBA.
This commit changes the tool to adhere to the GPT-entry (GPE) LBA given in the
header when writing out the modified GPT data.
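A minimal sketch of the relevant header field and its use; the struct is
abbreviated and the field names are illustrative (on disk, the 64-bit
partition-entry LBA sits at byte offset 72 of the GPT header):

  #include <cstdint>

  /* abbreviated GPT header, little-endian on disk */
  struct Gpt_header
  {
  	/* ... preceding fields (signature, revision, CRCs, LBAs) ... */
  	uint64_t gpe_lba;      /* LBA where the partition entries start  */
  	uint32_t gpe_count;    /* number of entries, up to 128           */
  	uint32_t gpe_size;     /* size of one entry, normally 128 bytes  */
  } __attribute__((packed));

  /* hypothetical write-out helper: honour the header's entry LBA instead of
     hard-coding LBA 2 right after the primary header */
  uint64_t entries_offset(Gpt_header const &hdr, uint64_t block_size)
  {
  	return hdr.gpe_lba * block_size;
  }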
Fixes #4720.
The old 'Io_response_handler::io_progress_response' interface has been
replaced by the 'Vfs::Env::User::wakeup_vfs_user' (issue #4697). The
remaining 'read_ready_response' method is now hosted in the
appropriately named 'Read_ready_response_handler'.
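A minimal sketch of the resulting interface as described; the exact header
location and namespace may differ:

  #include <util/interface.h>

  namespace Vfs { struct Read_ready_response_handler; }

  /* interface that now hosts the remaining response method */
  struct Vfs::Read_ready_response_handler : Genode::Interface
  {
  	virtual void read_ready_response() = 0;
  };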
Issue #4706
This patch keeps driving the internal state machines until no progress
can be made. This required fixing the return values of several execute
functions, which used to report progress while already being in their complete
state.
Along the way, the patch removes default switch cases to ensure that all
states are covered.
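The driving pattern is the usual progress loop; a self-contained sketch with a
hypothetical state machine, not the actual block-device code:

  /* hypothetical state machine: execute() returns true only on progress */
  struct Machine_sketch
  {
  	enum class State { INIT, WORKING, COMPLETE } state { State::INIT };

  	bool execute()
  	{
  		switch (state) {
  		case State::INIT:     state = State::WORKING;  return true;
  		case State::WORKING:  state = State::COMPLETE; return true;
  		case State::COMPLETE: return false;  /* no progress once complete */
  		}
  		return false;
  	}
  };

  /* keep driving the state machine until no more progress can be made */
  void drive(Machine_sketch &machine)
  {
  	while (machine.execute()) { }
  }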
Issue #4706
This commit supplements the various I/O signal handlers of the VFS
plugins with calls of the new 'Vfs::Env::User::wakeup_vfs_user'
interface, which will subsequently replace the old 'Io_progress_handler'
(issue #4697).
Issue #4706
The 'blocked_handles' queue was used to notify the VFS user via the
'io_progress_response' mechanism. This is now covered by the
'wakeup_vfs_user' interface introduced in issue #4697.
Issue #4706
Information about PS/2 and PIT was moved to app/pci_decode in the
following commit.
pci_decode: report devices from ACPI info
We still provide an empty <devices> node as the file itself is used by
platform-agnostic run scripts.
When running on x86 and riscv, never enter the kernel for cache maintenance
but use the dummy implementation of the generic base library instead.
On ARMv8, it is not necessary to enter privileged mode for cache cleaning and
unification of instruction/data caches, but only for invalidating cache lines
at all levels, which is necessary for the use cases where this function is
needed (coherency of DMA memory).
Fix genodelabs/genode#4339
This call is used to query the cache line size of the underlying CPU.
For now it is only implemented and used by 'arm_v8' platforms.
It does not distinguish between D-/I-cache sizes and always uses the
smallest size. Furthermore, it does not account for any discrepancy
in 'big.little' CPUs.
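On ARMv8, the smallest D- and I-cache line sizes can be read from CTR_EL0; a
minimal sketch of how such a query could be implemented (the function name is
hypothetical, and the code only builds for aarch64):

  #include <cstddef>
  #include <cstdint>

  /* hypothetical ARMv8-only helper: smallest cache line size in bytes */
  static size_t cache_line_size()
  {
  	uint64_t ctr = 0;
  	asm volatile ("mrs %0, ctr_el0" : "=r"(ctr));

  	/* IminLine [3:0] and DminLine [19:16] encode log2(words per line) */
  	unsigned const imin = unsigned(ctr & 0xf);
  	unsigned const dmin = unsigned((ctr >> 16) & 0xf);

  	/* one word is 4 bytes; use the smaller of the two line sizes */
  	return size_t(4) << ((imin < dmin) ? imin : dmin);
  }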
Issue #4339.
To prevent the kernel from deadlocking, or from calling itself via a syscall
when using a lock potentially held by a core thread, the log console's
backend for core (hw) gets replaced by a specific variant that checks
whether it runs in kernel context before using the mutex.
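A minimal sketch of the idea; the kernel-context check and the write primitive
are hypothetical placeholders, not core's actual code:

  #include <base/mutex.h>

  namespace Core_log_sketch {

  	/* hypothetical placeholders for the real core/kernel primitives */
  	static bool executing_in_kernel_context() { return false; }
  	static void write_to_console(char const *) { }

  	struct Console
  	{
  		Genode::Mutex _mutex { };

  		void print(char const *string)
  		{
  			/* inside the kernel, taking the mutex could deadlock or would
  			   require a syscall, so write without acquiring it */
  			if (executing_in_kernel_context()) {
  				write_to_console(string);
  				return;
  			}
  			Genode::Mutex::Guard guard(_mutex);
  			write_to_console(string);
  		}
  	};
  }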
Fix genodelabs/genode#3280