The calibration of the interpolation parameters was previously done only
periodically, every 500 ms. Because the parameters also had to be stable
for at least 3 calibration steps before interpolation was enabled, it
took at least 1.5 seconds after establishing a connection to get
microseconds-precise time values.
This is a problem for some drivers that start polling the time right away.
Thus, the timer connection now performs a calibration burst as soon as it
switches to the modern mode (the mode with microseconds precision).
During this phase it does several (currently 9) calibration steps
without any delay in between. The assumption is that this is fast enough
not to get interrupted by scheduling. Consequently, despite being small,
the measured values should be very stable, which is why the burst should
in most cases suffice to initialize the interpolation.
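The idea can be illustrated by the following minimal sketch (not the
actual Genode code; the sample sources, names, and the stability check
are assumptions):

  #include <cstdint>
  #include <cmath>

  /* provided by the platform/driver - mere placeholders in this sketch */
  extern std::uint64_t read_timestamp();
  extern std::uint64_t read_driver_time_us();

  bool calibration_burst(unsigned const steps = 9)
  {
     std::uint64_t prev_ts     = read_timestamp();
     std::uint64_t prev_us     = read_driver_time_us();
     double        last_factor = 0.0;
     bool          have_last   = false;

     for (unsigned i = 0; i < steps; i++) {

        std::uint64_t const ts = read_timestamp();
        std::uint64_t const us = read_driver_time_us();

        if (ts == prev_ts)  /* burst faster than the timestamp resolution */
           continue;

        double const factor = double(us - prev_us) / double(ts - prev_ts);

        /* require successive factors to stay within a 12.5% tolerance */
        if (have_last && std::fabs(factor - last_factor) > 0.125 * last_factor)
           return false;

        last_factor = factor;
        have_last   = true;
        prev_ts     = ts;
        prev_us     = us;
     }
     return have_last;  /* stable factor measured - interpolation can start */
  }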
Ref #2400
When in modern mode (with local time interpolation), the timer
connection used to maximize the left shift of its
timestamp-to-microseconds factor. The higher the shift, the more precise
the translation from timestamps to microseconds. If the timestamp
values used for determining the best shift were small - i.e., if the
delay between the calibration steps was small - we could end up with a
pretty big shift. If we then used that shift with bigger timestamp
values - i.e., called curr_time seldom or raised the calibration delays -
the big shift value became a problem: the framework had to scale down
all measured timestamps and time values temporarily to stay operative
until the next calibration step.
Thus, we now raise the shift only as far as needed for the resulting
factor to fulfill a given minimum. This keeps the shift as low as
possible while meeting the precision requirement. Currently, this
requirement is set to 8, meaning that the shifted factor shall be at
least 2^8 = 256.
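A minimal sketch of that rule, with made-up names and types (not the
actual implementation):

  #include <cstdint>

  enum { MIN_FACTOR_LOG2 = 8 };  /* shifted factor shall be at least 2^8 */

  /* smallest shift for which the scaled factor meets the minimum
     (ts_diff > 0 assumed; a real implementation must additionally guard
     against 'us_diff << shift' overflowing) */
  unsigned factor_shift(std::uint64_t us_diff, std::uint64_t ts_diff,
                        unsigned const max_shift = 32)
  {
     unsigned shift = 0;
     while (shift < max_shift &&
            (((us_diff << shift) / ts_diff) >> MIN_FACTOR_LOG2) == 0)
        shift++;

     return shift;
  }

  /* usage: us = (ts * ((us_diff << shift) / ts_diff)) >> shift */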
Ref #2400
As the timer session now provides a method 'elapsed_us', there is no
longer any need for internal calculations with millisecond values.
Ref #2400
As timer sessions are not expected to be microseconds-precise (because
of RPC latency and scheduling), the session interface provided only a
method 'elapsed_ms', although the back end of this method in the timer
driver works with microseconds.
However, in some cases it makes sense to have a method 'elapsed_us'. The
values it returns might be milliseconds away from the "real" time, but it
allows you to work with delays smaller than a millisecond without
getting a zero delta value.
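A small usage sketch of the difference (the 'timer' object,
'short_operation', and the return types are placeholders/assumptions):

  /* 'timer' stands for an open timer session, 'short_operation' for
     work that takes only a few hundred microseconds - both placeholders */

  unsigned long const start_ms = timer.elapsed_ms();
  unsigned long const start_us = timer.elapsed_us();

  short_operation();

  unsigned long const delta_ms = timer.elapsed_ms() - start_ms;  /* likely 0       */
  unsigned long const delta_us = timer.elapsed_us() - start_us;  /* a few hundred  */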
This commit is motivated by the need for fast bursts of calibration
steps for the time interpolation in the new timer connection.
Ref #2400
In the timeout framework, we maintain a translation-factor value to
translate between time and timestamps. To raise precision, we scale up
the factor when we calculate it and scale the result of its application
back down again later. This up and down scaling is achieved through
left and right shifting. Until now, the shift width was statically
chosen. However, some platforms need a big shift width and others a
smaller one. The one static shift width couldn't cover all platforms,
which caused overflows or precision problems.
Now, the shift width is chosen optimally for the actual translation
factor each time it gets re-calculated. This way, we can ensure that
the shift always renders the best precision level without the risk of
overflows.
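A simplified, self-contained sketch of the scheme (names and the 64-bit
bounds are illustrative assumptions, not the actual timeout-framework
code):

  #include <cstdint>
  #include <limits>

  struct Translation
  {
     std::uint64_t factor;  /* scaled-up microseconds-per-tick value */
     unsigned      shift;   /* scale that was applied to the factor  */
  };

  /* choose the largest shift whose scaled factor cannot overflow when
     applied to timestamps up to 'max_ts' (ts_diff > 0 assumed; a real
     implementation must also guard the left shift of 'us_diff') */
  Translation calc_translation(std::uint64_t us_diff,
                               std::uint64_t ts_diff,
                               std::uint64_t max_ts)
  {
     Translation t { us_diff / ts_diff, 0 };

     while (t.shift < 32) {

        std::uint64_t const next = (us_diff << (t.shift + 1)) / ts_diff;

        /* stop as soon as 'max_ts * next' could exceed 64 bits */
        if (max_ts && next > std::numeric_limits<std::uint64_t>::max() / max_ts)
           break;

        t.factor = next;
        t.shift  = t.shift + 1;
     }
     return t;
  }

  /* applying the factor scales the product back down by the same shift */
  std::uint64_t ts_to_us(std::uint64_t ts, Translation const &t)
  {
     return (ts * t.factor) >> t.shift;
  }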
Ref #2400
Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only periodically, independently of the calls
to Genode::Timer::curr_time. If one now calls Genode::Timer::curr_time,
the function takes the last read remote-time value and adapts it using
the timestamp difference since that remote-time read. The conversion
factor from timestamps to time is estimated on every remote-time read,
using the last read remote-time value and the timestamp difference since
the previous remote-time read.
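In simplified terms, the interpolation amounts to the following sketch
(assumed names, not the actual Genode::Timer implementation):

  #include <cstdint>

  struct Interpolated_clock
  {
     std::uint64_t remote_us = 0;  /* last value obtained via the RPC      */
     std::uint64_t remote_ts = 0;  /* timestamp taken at that remote read  */
     std::uint64_t factor    = 0;  /* scaled us-per-tick conversion factor */
     unsigned      shift     = 8;  /* precision shift, see previous commits */

     /* periodic calibration: read remote time, re-estimate the factor */
     void calibrate(std::uint64_t new_remote_us, std::uint64_t new_ts)
     {
        std::uint64_t const us_diff = new_remote_us - remote_us;
        std::uint64_t const ts_diff = new_ts        - remote_ts;

        if (ts_diff)
           factor = (us_diff << shift) / ts_diff;

        remote_us = new_remote_us;
        remote_ts = new_ts;
     }

     /* curr_time: extrapolate from the last remote read via timestamps */
     std::uint64_t curr_us(std::uint64_t now_ts) const
     {
        return remote_us + (((now_ts - remote_ts) * factor) >> shift);
     }
  };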
This commit also re-works the timeout test. The test now has two stages.
In the first stage, it tests fast polling of
Genode::Timer::curr_time. This stage checks the error between locally
interpolated and timer-driver time as well as whether the locally
interpolated time is monotone and sufficiently homogeneous. In the
second stage, several periodic and one-shot timeouts are scheduled at
once. This stage checks whether the timeouts trigger with sufficient
precision.
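Stage one boils down to checks of the following kind (a generic sketch;
'curr_time_us', 'driver_time_us', and the error bound are stand-ins, not
the actual test code):

  #include <cstdint>

  extern std::uint64_t curr_time_us();    /* locally interpolated time */
  extern std::uint64_t driver_time_us();  /* time read from the driver */

  bool poll_stage(unsigned const iterations, std::uint64_t const max_err_us)
  {
     std::uint64_t last = curr_time_us();

     for (unsigned i = 0; i < iterations; i++) {

        std::uint64_t const local  = curr_time_us();
        std::uint64_t const remote = driver_time_us();
        std::uint64_t const error  = local > remote ? local - remote
                                                    : remote - local;

        /* local time must be monotone and must stay close to the driver */
        if (local < last || error > max_err_us)
           return false;

        last = local;
     }
     return true;
  }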
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by the Genode::Timer on base-hw as a substitute for the
timestamp. This is because, on ARM, the timestamp function uses the ARM
performance counter, which stops counting while the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation, and we use the kernel-internal time instead.
With this commit, the timeout library becomes a basic library. That means
that it is linked against the LDSO, which then provides it to the program it
serves. Furthermore, you can't use the timeout library anymore without the
LDSO, because it is through the kernel-dependent LDSO make-files that we can
achieve a kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
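The following sketch shows what such a structured type could look like;
it is an illustration, not the actual Genode::Duration interface:

  #include <cstdint>

  struct Microseconds { std::uint64_t value; };
  struct Milliseconds { std::uint64_t value; };

  /* structured duration value - a sketch, not the actual interface */
  class Duration
  {
     private:

        std::uint64_t _us = 0;

     public:

        explicit Duration(Microseconds us) : _us(us.value) { }
        explicit Duration(Milliseconds ms) : _us(ms.value * 1000) { }

        void add(Microseconds us) { _us += us.value; }
        void add(Milliseconds ms) { _us += ms.value * 1000; }

        bool less_than(Duration const &other) const { return _us < other._us; }

        Microseconds trunc_to_plain_us() const { return Microseconds { _us }; }
  };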
Open issues:
* The timeout test fails on the Raspberry Pi because of precision errors in the
first stage. However, this does not render the framework unusable on the
Raspberry Pi in general; it is merely an issue when speaking of microseconds
precision.
* If we run on ARM with a kernel other than base-hw, the timestamp speed may
continuously vary from almost 0 up to CPU speed. The Timer, however,
only uses interpolation if the timestamp speed remained stable (12.5%
tolerance) for at least 3 observation periods. Currently, one period is
100 ms, so this amounts to 300 ms. As long as this is not the case,
Timer_session::elapsed_ms is called instead.
Anyway, it might happen that the CPU load was stable for some time, so
interpolation becomes active, and then the timestamp speed drops. In the
worst case, we would now have 100 ms of slowed-down time. The bad thing
about it would be that this also affects the timeout of the period.
Thus, it might "freeze" the local time for more than 100 ms.
On the other hand, if the timestamp speed suddenly rises after some
stable time, the interpolated time can get too fast. This would shorten
the period but may nonetheless result in drifting away into the far future.
Now we would have the problem that we can't deliver the real time
anymore until it has caught up, because the output of Timer::curr_time
shall be monotone. So, effectively, local time might "freeze" again for
more than 100 ms.
A solution would be not to use Trace::timestamp on ARM kernels other than
base-hw but a function whose return value causes the Timer to never use
interpolation because of its stability policy.
Fixes #2400
This patch improves the accounting for the backing store of
session-state meta data. Originally, the session state used to be
allocated by a child-local heap partition fed from the child's RAM
session. However, whereas this approach was somewhat practical from a
runtime's (parent's) point of view, the child component could not count
on the quota in its own RAM session. I.e., if the Child::heap grew at
the parent side, the child's RAM session would magically diminish. This
caused two problems. First, it violates assumptions of components like
init that carefully manage their RAM resources (and give most of them
away to their children). Second, if a child transfers most of its RAM
session quota to another RAM session (like init does), the child's RAM
session may actually not allow the parent's heap to grow, which is a
very difficult error condition to deal with.
In the new version, there is no Child::heap anymore. Instead, session
states are allocated from the runtime's RAM session. In order to let
children pay for these costs, the parent withdraws the local session
costs from the session quota donated by the child when the child
initiates a new session. Hence, in principle, all components on the
route of the session request take a small bite from the session quota to
pay for their local bookkeeping.
Consequently, the session quota that ends up at the server may be
depleted to a lesser or greater degree, depending on the route. In the
case where the remaining quota is insufficient for the server, the
server responds with 'QUOTA_EXCEEDED'. Since this behavior must
generally be expected, this patch equips the client-side 'Env::session'
implementation with the ability to re-issue session requests with
successively growing quota donations.
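Conceptually, the client-side retry works along the following lines (a
sketch with made-up names, exception type, and quota increment, not the
actual 'Env::session' code):

  #include <cstddef>

  struct Quota_exceeded { };  /* stands for the server's QUOTA_EXCEEDED reply */

  /* placeholder for the actual session-request RPC */
  extern void issue_session_request(char const *service, std::size_t quota);

  void session_with_retry(char const *service, std::size_t quota,
                          unsigned const max_attempts = 5)
  {
     for (unsigned i = 0; i < max_attempts; i++) {
        try {
           issue_session_request(service, quota);
           return;
        }
        catch (Quota_exceeded) {
           /* donate more quota on the next attempt, e.g., another 4 KiB */
           quota += 4096;
        }
     }
     throw Quota_exceeded();
  }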
For several of core's services (ROM, IO_MEM, IRQ), the default session
quota has now been increased by 2 KiB, which should suffice for session
requests routed over up to 3 hops, as is the common case for most run
scripts. For longer routes, the retry mechanism described above comes
into effect.
For the time being, we give a warning whenever the server-side quota
check triggers the retry mechanism. The warning may be removed at a
later stage.
This patch enables warnings if one of the deprecated functions that rely
on the implicit use of the global Genode::env() accessor is called.
For the time being, some places within the base framework continue
to rely on the global function while omitting the warning by calling
'env_deprecated' instead of 'env'.
Issue #1987
This patch supplements each existing connection type with a new
constructor that is meant to replace the original one. The new
one takes a reference to the component's environment as argument and
thereby does not rely on the presence of the globally accessible
'env()' interface.
The original constructors are marked as deprecated. Once we have
completely abolished the use of the global 'env()', we will remove them.
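Using a timer connection as an example, the transition looks roughly as
follows (the helper function is illustrative; the point is only the
constructor taking 'Genode::Env &'):

  #include <timer_session/connection.h>

  /* 'env' refers to the Genode::Env reference handed to the component */
  void open_timer(Genode::Env &env)
  {
     /* deprecated: implicitly uses the global env() behind the scenes */
     /* Timer::Connection timer; */

     /* new: the component's environment is passed explicitly */
     Timer::Connection timer(env);

     timer.msleep(1000);
  }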
Fixes #1960
This patch changes the top-level directory layout as a preparatory
step for improving the tools for managing 3rd-party source code.
The rationale is described in the issue referenced below.
Issue #1082