Timing itself costs time. Thus, the stressful timeout phase of the
test is not exactly as long as configured but a little bit longer. This is why
the fast timeouts can trigger more often than expected
(the timer has a static timeout-rate limit). Normally, we account for this
effect with an error tolerance of 10%. But at least on foc x86_32 (PIT with a
very low maximum timeout), timing is so expensive that 10% is not enough. We
have to raise it to 11%.
No starvation of timeout signals
--------------------------------
Add several timeouts < 1ms to the stress test and check that timeout
handling doesn't become significantly unfair (starvation) in this situation
where some timeouts trigger much faster than they can be handled.
Rate limiting for timeout handling in timer
-------------------------------------------
Ensure that the timer does not handle timeouts again within 1000
microseconds after the last handling of timeouts. This makes denial-of-service
attacks harder. This commit does not limit the rate at which timeout
signals are handled inside the timer, but it causes the timer to trigger
user timeouts less often. If a client continuously installs a very small
timeout at the timer, it still causes a signal to be submitted to the timer
each time, and some extra CPU time to be spent in the internal handling
method. But only every 1000 microseconds does this internal handling cause
user timeouts to trigger.
If we also wanted to limit the calls of the internal handling method, to
ensure that CPU time is spent beside the RPCs only every 1000
microseconds, things would get more complex. For instance, on NOVA,
Time_source::schedule_timeout(0) must be called each time a new timeout
gets installed and becomes head of the scheduling queue. We cannot
simply overwrite the already running timeout with the new one.
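The rate-limiting idea can be sketched as follows (names and structure are
hypothetical, not the actual timer code):

  struct Rate_limited_handling
  {
      enum { MIN_HANDLE_PERIOD_US = 1000 };

      unsigned long last_handled_us = 0;

      /* called for every timeout signal arriving at the timer */
      void handle(unsigned long now_us)
      {
          /* within 1000 us of the last handling, no user timeout triggers */
          if (now_us - last_handled_us < MIN_HANDLE_PERIOD_US)
              return;

          last_handled_us = now_us;
          trigger_expired_user_timeouts();
      }

      void trigger_expired_user_timeouts() { /* ... */ }
  };
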
Ref #2490
Create periodic and one-shot timeouts with the maximum duration
to see if this triggers any corner-case bugs. They must not trigger during
the test.
Ref #2490
Add a "writeable" policy option to the ahci_drv and part_blk Block
servers and default from writeable to ready-only. Should a policy
permit write acesss the session request argument "writeable" may still
downgrade a session to ready-only.
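A hypothetical part_blk configuration illustrating the option (labels and
partition numbers are made up):

  <config>
    <!-- write access permitted by policy; the session argument "writeable"
         may still downgrade the session to read-only -->
    <policy label_prefix="test-rw" partition="1" writeable="yes"/>
    <!-- defaults to read-only when the attribute is omitted -->
    <policy label_prefix="test-ro" partition="2"/>
  </config>
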
Fixes#2469
The VFS library can be used in single-threaded or multi-threaded
environments and depending on that, signals are handled by the same thread
which uses the VFS library or possibly by a different thread. If a VFS
plugin needs to block to wait for a signal, there is currently no way
which works reliably in both environments.
For this reason, this commit makes the interface of the VFS library
nonblocking, similar to the File_system session interface.
The most important changes are the following (a usage sketch follows the list):
- Directories are created and opened with the 'opendir()' function and the
directory entries are read with the recently introduced 'queue_read()'
and 'complete_read()' functions.
- Symbolic links are created and opened with the 'openlink()' function and
the link target is read with the 'queue_read()' and 'complete_read()'
functions and written with the 'write()' function.
- The 'write()' function does not wait for signals anymore. This can have
the effect that data written by a VFS library user has not been
processed by a file system server yet when the library user asks for the
size of the file or closes it (both done with RPC functions at the file
system server). For this reason, a user of the VFS library should
request synchronization before calling 'stat()' or 'close()'. To make
sure that a file system server has processed all write request packets
which a client submitted before the synchronization request,
synchronization is now requested at the file system server with a
synchronization packet instead of an RPC function. Because of this
change, the synchronization interface of the VFS library is now split
into 'queue_sync()' and 'complete_sync()' functions.
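The resulting usage pattern looks roughly like the following sketch. The
result-code names and signatures only approximate the VFS interface, and
'wait_for_io_progress' stands for an application-specific way to wait for an
I/O signal:

  template <typename WAIT>
  void read_and_sync(Vfs::Vfs_handle &h, char *dst, Vfs::file_size count,
                     WAIT const &wait_for_io_progress)
  {
      Vfs::file_size out = 0;

      /* enqueue the read request, retrying while the queue is full */
      while (!h.fs().queue_read(&h, count))
          wait_for_io_progress();

      /* poll for completion instead of blocking for a signal */
      while (h.fs().complete_read(&h, dst, count, out)
             == Vfs::File_io_service::READ_QUEUED)
          wait_for_io_progress();

      /* request synchronization before calling stat() or close() */
      while (!h.fs().queue_sync(&h))
          wait_for_io_progress();

      while (h.fs().complete_sync(&h)
             == Vfs::File_io_service::SYNC_QUEUED)
          wait_for_io_progress();
  }
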
Fixes#2399
The run script did not consider the routing for the environment ROM
sessions for the test-iso component. It routed all ROM sessions -
including the ones for the executable and the dynamic linker - to
fs_rom. The patch also adds the cap quota definitions required since
version 17.05 and fixes a whitespace inconsistency between the test
program and the run script.
Thanks to Steven Harp for reporting!
The new version of the test exercises the combination of fs_report with
ram_fs and fs_rom as a more flexible alternative to report_rom.
It covers two corner cases that remained unaddressed by fs_rom and
ram_fs so far: First, the late installation of a ROM-update signal
handler at fs_rom right before the content of the file is modified.
Second, the case where the requested file is not present on the file
system at the creation time of the ROM session. Here, ram_fs failed to
inform listeners of the compound directory about the file created later.
On platforms that use the PIT timer driver, 'elapsed_ms' is pretty
imprecise/unsteady (up to 3 ms deviation) for a reason that is not
clearly determined yet. On Fiasco and Fiasco.OC, which use kernel timing,
the behavior is the same. So, on these platforms, our locally interpolated time
seems to be fine but the reference time is bad. Until this is fixed, we
raise the error tolerance for these platforms in the run script.
Ref #2400
The result-buffer related members of the fast polling test are
the same for each buffered result type. Thus, we can simplify the
code by providing them through a struct.
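Roughly, the refactoring boils down to something like this (hypothetical
member names):

  template <typename T>
  struct Result_buffer
  {
      enum { MAX_RESULTS = 1000 };

      unsigned count = 0;             /* number of polled values recorded */
      T        values[MAX_RESULTS];   /* one slot per fast-polling round  */
  };
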
Ref #2400
On QEMU, NOVA uses the pretty unstable TSC emulation as primary time
source. Thus, timeouts do not trigger with the common precision (< 50
ms). Use an error tolerance of 200 ms for this platform constellation.
Ref #2400
The fast polling test uses one timer session for raw 'elapsed_ms' calls
and another one for potentially interpolated 'curr_time' calls. It then
compares the two results against each other. However, until now, the
test did not consider that the duration of the session construction may
create a noticeable shift between the local times of the two sessions.
This shift is now determined and compensated before doing any
comparison.
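In code, the compensation amounts to something like the following sketch (how
the millisecond value is obtained from 'curr_time' is an assumption):

  Timer::Connection timer_1(env);   /* raw elapsed_ms() source          */
  Timer::Connection timer_2(env);   /* interpolated curr_time() source  */

  /* shift caused by constructing the two sessions one after another */
  unsigned long const shift_ms =
      timer_1.elapsed_ms()
      - timer_2.curr_time().trunc_to_plain_us().value / 1000;

  /* all later comparisons subtract 'shift_ms' from the elapsed_ms side */
  unsigned long const remote_ms = timer_1.elapsed_ms() - shift_ms;
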
Ref #2400
The multiple-handlers test checked whether handlers at one signal context
were activated in a fair manner. But on Qemu, the error tolerance of one was
too small in rare cases (2 of 100 runs). However, having multiple
handlers for the same signal context can be considered deprecated
anyway. With the recommended Signal_handler wrapper for signal sessions,
you can't use this feature. Thus, we removed the multiple-handlers test.
Fixes#2450
On platforms where we do not have local time interpolation, we can simply
skip the first test stage in the timeout test. This way, we can at least
test the rest.
Fixes#2435
Previously, the Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only periodically, independently of the calls
to Genode::Timer::curr_time. If one now calls Genode::Timer::curr_time,
the function takes the last-read remote-time value and adapts it using
the timestamp difference since the remote-time read. The conversion
factor from timestamps to time is estimated on every remote-time read
using the last read remote-time value and the timestamp difference since
the last remote time read.
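Conceptually, the interpolation boils down to the following sketch (simplified,
hypothetical member names, no overflow handling):

  struct Interpolated_time
  {
      unsigned long long remote_us = 0;  /* last value read via elapsed_ms */
      unsigned long long ts        = 0;  /* timestamp taken at that read   */
      unsigned long long ts_per_us = 1;  /* estimated conversion factor    */

      /* called periodically, independently of curr_time() */
      void remote_read(unsigned long long new_remote_us,
                       unsigned long long new_ts)
      {
          if (new_remote_us > remote_us)
              ts_per_us = (new_ts - ts) / (new_remote_us - remote_us);
          if (ts_per_us == 0)
              ts_per_us = 1;

          remote_us = new_remote_us;
          ts        = new_ts;
      }

      /* local time = last remote time + locally measured difference */
      unsigned long long curr_time_us(unsigned long long now_ts) const
      {
          return remote_us + (now_ts - ts) / ts_per_us;
      }
  };
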
This commit also re-works the timeout test. The test now has two stages.
In the first stage, it tests fast polling of the
Genode::Timer::curr_time. This stage checks the error between the locally
interpolated time and the timer-driver time, as well as whether the locally
interpolated time is monotone and sufficiently homogeneous. In the
second stage, several periodic and one-shot timeouts are scheduled at
once. This stage checks whether the timeouts trigger sufficiently precisely.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by the Genode::Timer on base-hw as substitute for the
timestamp. This is because on ARM, the timestamp function uses the ARM
performance counter that stops counting when the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that get active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation and we use the kernel internal time instead.
With this commit, the timeout library becomes a basic library. That means
that it is linked against the LDSO which then provides it to the program it
serves. Furthermore, you can't use the timeout library anymore without the
LDSO, because the kernel-dependent LDSO make-files are what enable a
kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
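A simplified illustration of such a structured type (not the actual Genode
interface):

  struct Microseconds { unsigned long value; };
  struct Milliseconds { unsigned long value; };

  struct Duration
  {
      unsigned long long us;

      explicit Duration(Microseconds d) : us(d.value) { }
      explicit Duration(Milliseconds d) : us(d.value * 1000ULL) { }

      void add(Microseconds d) { us += d.value; }

      Microseconds trunc_to_plain_us() const
      {
          return Microseconds { (unsigned long)us };
      }
  };
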
Open issues:
* The timeout test fails on the Raspberry Pi because of precision errors in the
first stage. However, this does not render the framework unusable on the RPi
in general but is merely an issue when it comes to microseconds precision.
* If we run on ARM with a kernel other than hw, the timestamp speed may
continuously vary from almost 0 up to CPU speed. The Timer, however,
only uses interpolation if the timestamp speed remained stable (12.5%
tolerance) for at least 3 observation periods (see the sketch after this
list). Currently, one period is 100 ms, so that amounts to 300 ms. As long
as this is not the case, Timer_session::elapsed_ms is called instead.
Anyway, it might happen that the CPU load was stable for some time, so
interpolation becomes active, and then the timestamp speed drops. In the
worst case, we would now have 100 ms of slowed-down time. The bad thing
about this is that it also affects the timeout of the observation period.
Thus, it might "freeze" the local time for more than 100 ms.
On the other hand, if the timestamp speed suddenly rises after some
stable time, the interpolated time can get too fast. This would shorten the
period but may nonetheless result in drifting away into the far future.
Now we would have the problem that we can't deliver the real time
anymore until it has caught up, because the output of Timer::curr_time
shall be monotone. So, effectively, local time might "freeze" again for
more than 100 ms.
One solution would be to not use Trace::timestamp on ARM with kernels
other than hw but a function whose return value causes the Timer to never
use interpolation because of its stability policy.
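The stability policy mentioned in the list corresponds roughly to the
following check (hypothetical sketch):

  struct Factor_stability
  {
      enum { REQUIRED_STABLE_PERIODS = 3 };

      unsigned long long last_factor    = 0;
      unsigned           stable_periods = 0;

      /* called once per 100-ms observation period with the new factor */
      bool interpolation_allowed(unsigned long long factor)
      {
          /* 12.5% tolerance around the previously observed factor */
          unsigned long long const max_delta = last_factor / 8;
          bool const stable = factor <= last_factor + max_delta
                           && factor + max_delta >= last_factor;

          stable_periods = stable ? stable_periods + 1 : 0;
          last_factor    = factor;

          /* use interpolation only after 3 stable periods (300 ms) */
          return stable_periods >= REQUIRED_STABLE_PERIODS;
      }
  };
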
Fixes#2400
This patch reduces the number of exception types by facilitating
globally defined exceptions for common usage patterns shared by most
services. In particular, RPC functions that demand a session-resource
upgrade no longer reflect this condition via a session-specific
exception but via the 'Out_of_ram' or 'Out_of_caps' types.
Furthermore, the 'Parent::Service_denied', 'Parent::Unavailable',
'Root::Invalid_args', 'Root::Unavailable', 'Service::Invalid_args',
'Service::Unavailable', and 'Local_service::Factory::Denied' types have
been replaced by the single 'Service_denied' exception type defined in
'session/session.h'.
This consolidation eases the error handling (there are fewer exceptions
to handle), alleviates the need to convert exceptions along the
session-creation call chain, and avoids possible aliasing problems
(catching the wrong type with the same name but living in a different
scope).
This patch mirrors the accounting and trading scheme that Genode employs
for physical memory to the accounting of capability allocations.
Capability quotas must now be explicitly assigned to subsystems by
specifying a 'caps=<amount>' attribute to init's start nodes.
Analogously to RAM quotas, cap quotas can be traded between clients and
servers as part of the session protocol. The capability budget of each
component is maintained by the component's corresponding PD session at
core.
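For example, a start node with an explicit capability quota looks like this
(the concrete amount is arbitrary):

  <start name="timer" caps="100">
    <resource name="RAM" quantum="1M"/>
    <provides> <service name="Timer"/> </provides>
  </start>
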
At the current stage, the accounting is applied to RPC capabilities,
signal-context capabilities, and dataspace capabilities. Capabilities
that are dynamically allocated via core's CPU and TRACE service are not
yet covered. Also, the capabilities allocated by resource multiplexers
outside of core (like nitpicker) must be accounted by the respective
servers, which is not covered yet.
If a component runs out of capabilities, core's PD service prints a
warning to the log. To observe the consumption of capabilities per
component in detail, the PD service is equipped with a diagnostic
mode, which can be enabled via the 'diag' attribute in the target
node of init's routing rules. E.g., the following route enables the
diagnostic mode for the PD session of the "timer" component:
  <default-route>
    <service name="PD" unscoped_label="timer">
      <parent diag="yes"/>
    </service>
    ...
  </default-route>
For subsystems based on a sub-init instance, init can be configured
to report the capability-quota information of its subsystems by
adding the attribute 'child_caps="yes"' to init's '<report>'
config node. Init's own capability quota can be reported by adding
the attribute 'init_caps="yes"'.
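For example, the corresponding report node in init's configuration may look
like this:

  <report child_caps="yes" init_caps="yes"/>
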
Fixes#2398
This patch lets the service types in base/service.h make use of the new
'Quota_transfer::Account' and uses 'Quota_transfer' objects in
base/child.cc and init/server.cc.
Furthermore, it decouples the notion of an 'Async_service' from
'Child_service'. Init's 'Routed_service' is no longer a 'Child_service'
but is based on the new 'Async_service' instead.
With this patch in place, quota transfers no longer implicitly use
'Ram_session_client' objects. So transfers can in principle originate
from component-local 'Ram_session_component' objects, e.g., as used by
noux. Therefore, this patch removes a stumbling block for turning noux
into a single-threaded component in the future.
Issue #2398
This patch replaces the former use of size_t with the use of the
'Ram_quota' type to improve type safety (in particular to avoid
accidentally mixing up RAM quotas with cap quotas).
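The gist is a distinct wrapper type rather than a plain integer, along the
lines of the following sketch (the Cap_quota counterpart is shown for
contrast):

  struct Ram_quota { Genode::size_t value; };
  struct Cap_quota { Genode::size_t value; };

  /* cannot accidentally be called with a Cap_quota or a raw size_t */
  void transfer_quota(Ram_quota amount);
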
Issue #2398
For asynchronously provided sessions, the parent has to maintain the
session state as long as the server hasn't explicitly responded to a
close request. For this reason, the lifetime of such session states is
bound to the server, not the client.
When the server responds to a close request, the session state gets
freed. The 'session_response' implementation does not immediately
destroy the session state but delegates the destruction to a client-side
callback, which thereby also notifies the client. However, the code did
not consider the case where the client has completely vanished at
session-response time. In this case, we need to drop the session state
immediately.
Fixes#2391
The base class of Registered must provide a virtual destructor to enable
safe deletion with just a base class pointer. This requirement can be
lifted by using Registered_no_delete in places where the deletion
property is not needed.
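In code, the requirement looks roughly like this (sketch, allocator handling
simplified):

  struct Item
  {
      virtual ~Item() { }   /* required for deletion via an Item pointer */
  };

  void example(Genode::Allocator &heap,
               Genode::Registry<Genode::Registered<Item> > &registry)
  {
      /* registers itself at 'registry' on construction */
      Item *item = new (heap) Genode::Registered<Item>(registry);

      /* the virtual destructor ensures that ~Registered<Item> runs
         and deregisters the object */
      Genode::destroy(heap, item);
  }
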
Fixes#2331
Ldso no longer automatically executes static constructors of the
binary and the shared libraries the binary depends on. If static
construction is required (e.g., if a shared library with constructors is
used or a compilation unit contains global statics), the component needs
to execute the constructors explicitly in Component::construct() via
Genode::Env::exec_static_constructors().
In the case of libc components this is done by the libc startup code
(i.e., the Component::construct() implementation in the libc).
The loading of shared objects at runtime is not affected by this change
and constructors of those objects are executed immediately.
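For a non-libc component that needs static construction, this amounts to:

  #include <base/component.h>

  void Component::construct(Genode::Env &env)
  {
      /* run constructors of global statics and of used shared libraries */
      env.exec_static_constructors();

      /* ... regular component setup ... */
  }
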
Fixes#2332
The test used to rely on init's formerly built-in policy of answering
resource requests with slack memory, if available. Since init no longer
responds to resource requests in an autonomous way, we use a dynamically
configured sub-init instance as runtime for the test. This instance, in
turn, is monitored and controlled such that resource requests result
in quota upgrades. The monitoring component is implemented in
the same test-resource_request program as the test. Both roles are
distinguished by the "role" config attribute.
This is a follow-up to "init: explicit response to resource requests".
This patch improves init's dynamic reconfigurability with respect to
adjustments of the RAM quota assigned to the children.
If the RAM quota is decreased, init withdraws as much quota from the
child's RAM session as possible. If the child's RAM session does not
have enough available quota, a resource-yield request is issued to
the child. Cooperative children may respond to such a request by
releasing memory.
If the RAM quota is increased, the child's RAM session is upgraded.
If the configuration exceeds init's available RAM, init re-attempts
the upgrade whenever new slack memory becomes available (e.g., when
other children disappear).
This patch enables init to apply changes of any server's <provides>
declarations in a differential way. Servers can in principle be extended
by new services without re-starting them. Of course, changes of the
<provides> declarations may affect clients or would-be clients as this
information is taken into account for the session routing.
Under certain timing conditions, the test would end up flushing the
input from the input filter in a nested way, which ultimately resulted
in lost input events of the outer nesting level. This patch eliminates
this corner case and thereby stabilizes the key-repeat test.