When running core as the kernel inside every component, core needs its
own stack area, separate from the stack area of the user-land
component.
Ref #2091
For most base platforms (except Linux and seL4), the initialization of
boot modules is the same. Thus, merge the default implementation into the
new unit base/src/core/platform_rom_modules.cc.
Ref #2490
The recently implemented capability resource trading scheme unfortunately
broke the automated capability memory upgrade mechanism needed by base-hw
kernel/core. This commit splits the capability memory upgrade mechanism
from the PD session ram_quota upgrade, and moves that functionality
into a separate Pd_session::Native_pd interface.
Ref #2398
On ARM, we do not have a component-local hardware time source. The ARM
performance counter has no reliable frequency as the ARM idle command
halts the counter. Thus, we do not do local time interpolation on ARM,
except when running on the HW kernel, in which case we can read out the
kernel time instead.
Ref #2435
By separating the session-interface concerns from the mechanics of the
dataspace creation, the code becomes simpler to follow, and the RAM
session can be more easily merged with the PD session in a subsequent
step.
Issue #2407
This patch allows core's 'Signal_transmitter' implementation to sidestep
the 'Env::Pd' interface and thereby adhere to a stricter layering within
core. The 'Signal_transmitter' now uses - on kernels that depend on it -
a dedicated (and fairly freestanding) RPC proxy mechanism for signal
delivery, instead of channeling signals through the 'Pd_session::submit'
RPC function.
Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only periodically, independently of the calls to
Genode::Timer::curr_time. When Genode::Timer::curr_time is called, the
function takes the last-read remote-time value and extrapolates it
using the timestamp difference since that remote-time read. The
conversion factor from timestamps to time is re-estimated on every
remote-time read, based on how far the remote time and the timestamp
have advanced since the previous remote-time read.
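The scheme can be pictured with the following self-contained sketch; the
type and member names are hypothetical and do not mirror the actual
Genode::Timer implementation:

  /* hypothetical sketch of the time-interpolation scheme */
  #include <cstdint>

  struct Interpolating_time_source
  {
      using Timestamp = std::uint64_t;

      std::uint64_t _remote_us   { 0 }; /* last remote time in microseconds    */
      Timestamp     _remote_ts   { 0 }; /* timestamp taken at that remote read */
      double        _us_per_tick { 0 }; /* conversion factor, re-estimated on
                                           every remote-time read              */

      /* called periodically, independently of curr_time_us() */
      void remote_time_read(std::uint64_t remote_us, Timestamp ts)
      {
          if (ts > _remote_ts)
              _us_per_tick = double(remote_us - _remote_us)
                           / double(ts - _remote_ts);
          _remote_us = remote_us;
          _remote_ts = ts;
      }

      /* extrapolate the current time from the last remote-time value */
      std::uint64_t curr_time_us(Timestamp now) const
      {
          return _remote_us
               + std::uint64_t(double(now - _remote_ts) * _us_per_tick);
      }
  };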
This commit also re-works the timeout test. The test now has two stages.
In the first stage, it tests fast polling of Genode::Timer::curr_time.
This stage checks the error between the locally interpolated and the
timer-driver time, as well as whether the locally interpolated time is
monotonic and sufficiently homogeneous. In the second stage, several
periodic and one-shot timeouts are scheduled at once. This stage checks
whether the timeouts trigger with sufficient precision.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by Genode::Timer on base-hw as a substitute for the
timestamp. This is because, on ARM, the timestamp function uses the ARM
performance counter, which stops counting while the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs
to be scheduled. Thus, the ARM performance counter is not a good choice
for time interpolation, and we use the kernel-internal time instead.
With this commit, the timeout library becomes a basic library. That means
it is linked against the LDSO, which then provides it to the program it
serves. Furthermore, the timeout library can no longer be used without the
LDSO because the kernel-dependent LDSO makefiles are what enable a
kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
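As a rough sketch of the idea (the interface shown here is illustrative
only, not the actual one):

  /* illustrative sketch of a structured duration type */
  #include <cstdint>

  struct Microseconds { std::uint64_t value; };
  struct Milliseconds { std::uint64_t value; };

  struct Duration
  {
      std::uint64_t _us = 0;

      void add(Microseconds us) { _us += us.value; }
      void add(Milliseconds ms) { _us += ms.value * 1000; }

      Microseconds trunc_to_plain_us() const { return Microseconds { _us }; }
      Milliseconds trunc_to_plain_ms() const { return Milliseconds { _us / 1000 }; }
  };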
Open issues:
* The timeout test fails on the Raspberry Pi because of precision errors in
  the first stage. However, this does not render the framework unusable in
  general on the Raspberry Pi but is merely an issue when microsecond
  precision is required.
* If we run on ARM with a kernel other than HW, the timestamp speed may
  continuously vary from almost 0 up to CPU speed. The Timer, however,
  only uses interpolation if the timestamp speed remained stable (12.5%
  tolerance) for at least 3 observation periods. Currently, one period is
  100 ms, so that amounts to 300 ms. As long as this is not the case,
  Timer_session::elapsed_ms is called instead.
  Nevertheless, it might happen that the CPU load was stable for some time,
  so interpolation becomes active, and then the timestamp speed drops. In
  the worst case, we would now have 100 ms of slowed-down time. The bad
  thing about it is that this also affects the timeout of the period.
  Thus, it might "freeze" the local time for more than 100 ms.
  On the other hand, if the timestamp speed suddenly rises after some
  stable time, the interpolated time can become too fast. This would
  shorten the period but may nonetheless result in drifting away into the
  far future. Now we would have the problem that we cannot deliver the
  real time anymore until it has caught up, because the output of
  Timer::curr_time shall be monotonic. So, effectively, local time might
  "freeze" again for more than 100 ms.
  One solution would be to not use Trace::timestamp on ARM without HW but
  a function whose return value causes the Timer to never use
  interpolation because of its stability policy.
Fixes #2400
Removes the following Fiasco.OC specific features:
* GDB extensions for Fiasco.OC
* i.MX53 support for Fiasco.OC
* Kernel debugger terminal driver
* Obsolete interface Native_pd
* Obsolete function of interface Native_cpu
This patch reduces the number of exception types by facilitating
globally defined exceptions for common usage patterns shared by most
services. In particular, RPC functions that demand a session-resource
upgrade no longer reflect this condition via a session-specific
exception but via the 'Out_of_ram' or 'Out_of_caps' types.
Furthermore, the 'Parent::Service_denied', 'Parent::Unavailable',
'Root::Invalid_args', 'Root::Unavailable', 'Service::Invalid_args',
'Service::Unavailable', and 'Local_service::Factory::Denied' types have
been replaced by the single 'Service_denied' exception type defined in
'session/session.h'.
This consolidation eases the error handling (there are fewer exceptions
to handle), alleviates the need to convert exceptions along the
session-creation call chain, and avoids possible aliasing problems
(catching the wrong type with the same name but living in a different
scope).
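As an illustration, code that may trigger a session-resource upgrade can
now handle both conditions with one generic retry pattern (the helper
below is a sketch with placeholder types, not the actual Genode API):

  /* sketch of handling the globally defined resource exceptions */
  struct Out_of_ram  { };
  struct Out_of_caps { };

  template <typename OP, typename MORE_RAM, typename MORE_CAPS>
  void retry(OP const &op, MORE_RAM const &more_ram, MORE_CAPS const &more_caps)
  {
      for (;;) {
          try { op(); return; }
          catch (Out_of_ram)  { more_ram();  } /* donate more RAM quota */
          catch (Out_of_caps) { more_caps(); } /* donate more cap quota */
      }
  }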
This patch mirrors the accounting and trading scheme that Genode employs
for physical memory to the accounting of capability allocations.
Capability quotas must now be explicitly assigned to subsystems by
specifying a 'caps=<amount>' attribute to init's start nodes.
Analogously to RAM quotas, cap quotas can be traded between clients and
servers as part of the session protocol. The capability budget of each
component is maintained by the component's corresponding PD session at
core.
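For instance, a start node with an explicit capability quota could look
like this (component name and amount chosen for illustration):

  <start name="timer" caps="100">
    ...
  </start>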
At the current stage, the accounting is applied to RPC capabilities,
signal-context capabilities, and dataspace capabilities. Capabilities
that are dynamically allocated via core's CPU and TRACE service are not
yet covered. Also, the capabilities allocated by resource multiplexers
outside of core (like nitpicker) must be accounted for by the respective
servers, which is not yet covered either.
If a component runs out of capabilities, core's PD service prints a
warning to the log. To observe the consumption of capabilities per
component in detail, the PD service is equipped with a diagnostic
mode, which can be enabled via the 'diag' attribute in the target
node of init's routing rules. E.g., the following route enables the
diagnostic mode for the PD session of the "timer" component:
  <default-route>
    <service name="PD" unscoped_label="timer">
      <parent diag="yes"/>
    </service>
    ...
  </default-route>
For subsystems based on a sub-init instance, init can be configured
to report the capability-quota information of its subsystems by
adding the attribute 'child_caps="yes"' to init's '<report>'
config node. Init's own capability quota can be reported by adding
the attribute 'init_caps="yes"'.
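For example:

  <report child_caps="yes" init_caps="yes"/>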
Fixes#2398
By installing the core object to bin/, we follow the same convention as
for regular binaries. This, in turn, enables us to ship core in a
regular binary archive. The patch also adjusts the run tool to pick up
the core object from bin/ for the final linking stage.
This patch makes the ABI mechanism available to shared libraries other
than Genode's dynamic linker. It thereby allows us to introduce
intermediate ABIs at the granularity of shared libraries. This is useful
for slow-moving ABIs such as the libc's interface but it will also
become handy for the package management.
To implement the feature, the build system had to be streamlined a bit.
In particular, archive dependencies and shared-lib dependencies are now
handled separately, and the global list of 'SHARED_LIBS' is no more.
Now, the variable with the same name holds the per-target list of shared
libraries used by the target.
This patch makes the benefit of the recently introduced unified Genode
ABI available to developers by enabling the use of multiple kernels from
within a single build directory. The create_builddir tool has gained a
new set of kernel-agnostic platform arguments such as x86_32 or panda.
Most build targets within such a build directory are in principle
compatible with all kernels that support the selected hardware platform.
To execute a
scenario via the run tool, one has to select the kernel to use by
setting the 'KERNEL' argument in the build configuration
(etc/build.conf). Alternatively, the 'KERNEL' can be specified as
command-line argument of the Genode build system, e.g.:
make run/log KERNEL=nova
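In 'etc/build.conf', the same selection might be expressed as (sketch,
assuming make-variable syntax):

  KERNEL ?= nova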
This allows us to easily switch from one kernel to another without
rebuilding any Genode component except for the very few kernel-specific
ones.
The new version of the 'create_builddir' tool is still compatible with
the old version. The old kernel-specific build directories can still be
created. However, those variants will eventually be removed.
Note that the commit removes the 'ports-foc' repository from the
generated 'build.conf' files. As this is only meaningful for 'foc',
I did not want to include it in the list of regular repositories (as
visible in an 'x86_32' build directory). Hence, the repository must
now be added manually in order to use L4Linux.
Issue #2190