os/timer: interpolate time via timestamps

Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as its back end. Now, Genode::Timer reads
this remote time only periodically, independently of the calls to
Genode::Timer::curr_time. When Genode::Timer::curr_time is called, the
function takes the last-read remote-time value and adjusts it using the
timestamp difference since that remote-time read. The conversion factor
from timestamps to time is re-estimated on every remote-time read, using
the last-read remote-time value and the timestamp difference since the
previous remote-time read.
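Expressed as a minimal C++ sketch (all names are hypothetical, not the
actual Genode::Timer interface), the mechanism amounts to:

    #include <cstdint>

    /* sketch of the interpolation state kept by the timer (hypothetical) */
    struct Interpolation_state
    {
        std::uint64_t remote_us   { 0 }; /* last remote time read, in us   */
        std::uint64_t ts          { 0 }; /* timestamp taken at that read   */
        double        us_per_tick { 0 }; /* estimated conversion factor    */

        /* called on each periodic remote-time read */
        void remote_time_read(std::uint64_t new_remote_us, std::uint64_t new_ts)
        {
            /* estimate the factor from the progress since the last read */
            if (new_ts > ts)
                us_per_tick = double(new_remote_us - remote_us)
                            / double(new_ts - ts);
            remote_us = new_remote_us;
            ts        = new_ts;
        }

        /* called by curr_time: extrapolate via the timestamp difference */
        std::uint64_t curr_time_us(std::uint64_t now_ts) const
        {
            return remote_us
                 + std::uint64_t(double(now_ts - ts) * us_per_tick);
        }
    };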
This commit also reworks the timeout test. The test now has two stages.
The first stage tests fast polling of Genode::Timer::curr_time. It checks
the error between the locally interpolated time and the timer-driver time,
as well as whether the locally interpolated time is monotonic and
sufficiently homogeneous. In the second stage, several periodic and
one-shot timeouts are scheduled at once. This stage checks whether the
timeouts trigger sufficiently precisely.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
used solely by Genode::Timer on base-hw, as a substitute for the
timestamp. The reason is that on ARM, the timestamp function uses the ARM
performance counter, which stops counting while the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation, and we use the kernel-internal time instead.
With this commit, the timeout library becomes a basic library. That means
it is linked against LDSO, which then provides it to the program it
serves. Furthermore, the timeout library can no longer be used without
LDSO, because the kernel-dependent LDSO make files are what make a
kernel-dependent timeout implementation possible.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and plain integer types for
duration values.
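The idea behind such a structured type can be sketched like this (a
hypothetical shape, not the real Genode::Duration declaration):

    #include <cstdint>

    /* distinct wrapper types prevent mixing up time units in interfaces */
    struct Microseconds { std::uint64_t value; };
    struct Milliseconds { std::uint64_t value; };

    struct Duration
    {
        std::uint64_t us;

        explicit Duration(Microseconds us_) : us(us_.value)       { }
        explicit Duration(Milliseconds ms)  : us(ms.value * 1000) { }
    };

A function that takes a Duration (or Microseconds) can then no longer be
called with a bare integer of unclear unit.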
Open issues:

* The timeout test fails on the Raspberry Pi because of precision errors
  in the first stage. This does not render the framework unusable on the
  Raspberry Pi in general but is merely an issue when it comes to
  microseconds precision.
* If we run on ARM with a kernel other than base-hw, the timestamp speed
  may vary continuously from almost 0 up to the CPU speed. The Timer,
  however, only uses interpolation if the timestamp speed has remained
  stable (12.5% tolerance) for at least 3 observation periods; a sketch of
  this policy follows below. Currently, one period is 100 ms, so that
  makes 300 ms. As long as this is not the case, Timer_session::elapsed_ms
  is called instead.
  Still, the CPU load may be stable for some time, so interpolation
  becomes active, and then the timestamp speed drops. In the worst case,
  we would then have 100 ms of slowed-down time. The bad part is that this
  also stretches the timeout of the observation period itself, so it might
  "freeze" the local time for more than 100 ms.
  On the other hand, if the timestamp speed suddenly rises after some
  stable time, the interpolated time can get too fast. This would shorten
  the period but may nonetheless drift far into the future. We then could
  not deliver the real time until it has caught up, because the output of
  Timer::curr_time shall be monotonic. So, effectively, local time might
  "freeze" again for more than 100 ms.
  One solution would be to use, on ARM without base-hw, not
  Trace::timestamp but a function whose return value causes the Timer to
  never enable interpolation, due to its stability policy.

Fixes #2400
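The stability policy mentioned in the second open issue can be sketched
like this (hypothetical names and structure, not the actual
implementation):

    /* sketch: allow interpolation only after N stable observation periods */
    enum { REQUIRED_STABLE_PERIODS = 3 };

    struct Stability_check
    {
        double   last_factor    { 0 };
        unsigned stable_periods { 0 };

        /* evaluated once per 100-ms observation period */
        void observe(double factor)
        {
            /* 12.5% tolerance around the previous factor estimate */
            bool const stable = last_factor != 0
                             && factor >= last_factor * 0.875
                             && factor <= last_factor * 1.125;
            stable_periods = stable ? stable_periods + 1 : 0;
            last_factor    = factor;
        }

        bool interpolation_allowed() const {
            return stable_periods >= REQUIRED_STABLE_PERIODS; }
    };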

/*
 * \brief  Connection to timer service and timeout scheduler
 * \author Norman Feske
 * \date   2008-08-22
 */

/*
 * Copyright (C) 2008-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

#ifndef _INCLUDE__TIMER_SESSION__CONNECTION_H_
#define _INCLUDE__TIMER_SESSION__CONNECTION_H_

/* Genode includes */
#include <timer_session/client.h>
#include <base/connection.h>
#include <util/reconstructible.h>
#include <base/entrypoint.h>
#include <timer/timeout.h>
#include <trace/timestamp.h>
namespace Timer
{
	class Connection;
	template <typename> class Periodic_timeout;
	template <typename> class One_shot_timeout;
}
/**
 * Periodic timeout that is linked to a custom handler, scheduled when constructed
 */
template <typename HANDLER>
struct Timer::Periodic_timeout : private Genode::Noncopyable,
                                 private Genode::Timeout_handler
{
	private:

		using Duration     = Genode::Duration;
		using Timeout      = Genode::Timeout;
		using Microseconds = Genode::Microseconds;

		typedef void (HANDLER::*Handler_method)(Duration);

		Timeout _timeout;
		HANDLER &_object;
		Handler_method const _method;

		/*********************
		 ** Timeout_handler **
		 *********************/

		void handle_timeout(Duration curr_time) override {
			(_object.*_method)(curr_time); }

	public:

		Periodic_timeout(Connection    &timer,
		                 HANDLER       &object,
		                 Handler_method method,
		                 Microseconds   duration)
		:
			_timeout { timer },
			_object  { object },
			_method  { method }
		{
			_timeout.schedule_periodic(duration, *this);
		}
};
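
/*
 * Usage sketch (not part of the original header): a component that invokes
 * a handler method once per second. It assumes an already constructed
 * 'Timer::Connection timer' and '#include <base/log.h>':
 *
 *   struct Ticker
 *   {
 *       void tick(Genode::Duration) { Genode::log("tick"); }
 *
 *       Timer::Periodic_timeout<Ticker> _timeout;
 *
 *       Ticker(Timer::Connection &timer)
 *       : _timeout(timer, *this, &Ticker::tick,
 *                  Genode::Microseconds { 1000*1000 }) { }
 *   };
 */
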
/**
 * One-shot timeout that is linked to a custom handler, scheduled manually
 */
template <typename HANDLER>
class Timer::One_shot_timeout : private Genode::Noncopyable,
                                private Genode::Timeout_handler
{
	private:

		using Duration     = Genode::Duration;
		using Timeout      = Genode::Timeout;
		using Microseconds = Genode::Microseconds;

		typedef void (HANDLER::*Handler_method)(Duration);

		Timeout _timeout;
		HANDLER &_object;
		Handler_method const _method;

		/*********************
		 ** Timeout_handler **
		 *********************/
|
		void handle_timeout(Duration curr_time) override {
			(_object.*_method)(curr_time); }

	public:

		One_shot_timeout(Connection     &timer,
		                 HANDLER        &object,
		                 Handler_method  method)
		:
			_timeout { timer },
			_object  { object },
			_method  { method }
		{ }

		void schedule(Microseconds duration) {
			_timeout.schedule_one_shot(duration, *this); }

		void discard() { _timeout.discard(); }

		bool scheduled() { return _timeout.scheduled(); }
};
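
/*
 * Usage sketch (illustrative only, not part of this header): a component
 * that arms a one-shot timeout on itself. 'My_component' and its handler
 * method are assumptions made up for this example.
 *
 *   struct My_component
 *   {
 *       Timer::Connection _timer;
 *
 *       void _handle_timeout(Genode::Duration) { }
 *
 *       Timer::One_shot_timeout<My_component> _timeout {
 *           _timer, *this, &My_component::_handle_timeout };
 *
 *       My_component(Genode::Env &env) : _timer(env) {
 *           _timeout.schedule(Genode::Microseconds { 250*1000 }); }
 *   };
 */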

/**
 * Connection to timer service and timeout scheduler
 *
 * Multiplexes a timer session amongst different timeouts.
 */
class Timer::Connection : public  Genode::Connection<Session>,
                          public  Session_client,
                          private Genode::Time_source
{
	friend class Genode::Timeout;

	private:

		using Timeout           = Genode::Timeout;
		using Timeout_handler   = Genode::Timeout_handler;
		using Timestamp         = Genode::Trace::Timestamp;
		using Duration          = Genode::Duration;
		using Mutex             = Genode::Mutex;
		using Microseconds      = Genode::Microseconds;
		using Milliseconds      = Genode::Milliseconds;
		using Entrypoint        = Genode::Entrypoint;
		using Io_signal_handler = Genode::Io_signal_handler<Connection>;

		/*
		 * Noncopyable
		 */
		Connection(Connection const &);
		Connection &operator = (Connection const &);
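
		/*
		 * Note: with C++11 and later, the same intent could be expressed
		 * explicitly via deleted special member functions, e.g.
		 *
		 *   Connection(Connection const &) = delete;
		 *   Connection &operator = (Connection const &) = delete;
		 */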

		/*
		 * The mode determines which interface of the timer connection is
		 * enabled. Initially, a timer connection is in TIMER_SESSION mode.
		 * In this mode, the user can operate directly on the connection
		 * using the methods of the timer-session interface. As soon as the
		 * connection is handed over as an argument to the constructor of a
		 * Periodic_timeout or a One_shot_timeout, it switches to
		 * TIMEOUT_FRAMEWORK mode. From this point on, the only method that
		 * can be called directly on the connection is 'curr_time'. All
		 * other functionality is accessed through the interfaces of the
		 * timeout objects that reference the connection.
		 *
		 * These are the characteristics of the two modes:
		 *
		 * TIMER_SESSION:
		 *
		 * * Allows for both blocking and non-blocking timeout semantics.
		 * * Missing local interpolation leads to less precise curr_time
		 *   results.
		 * * Only one timeout at a time per connection.
		 *
		 * TIMEOUT_FRAMEWORK:
		 *
		 * * Supports only non-blocking timeout semantics.
		 * * More precise curr_time results through local interpolation.
		 * * Multiplexes multiple timeouts on the same connection.
		 */
		enum Mode { TIMER_SESSION, TIMEOUT_FRAMEWORK };
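
		/*
		 * Mode sketch (illustrative only): the same connection used first
		 * in TIMER_SESSION mode, then implicitly switched to
		 * TIMEOUT_FRAMEWORK mode by constructing a timeout for it.
		 * 'My_component' and '_tick' are assumptions for this example.
		 *
		 *   Timer::Connection timer { env };
		 *   timer.msleep(10);      // TIMER_SESSION mode, blocking call
		 *
		 *   Timer::Periodic_timeout<My_component> tick {
		 *       timer, *this, &My_component::_tick,
		 *       Genode::Microseconds { 100*1000 } };
		 *
		 *   // from now on, only timer.curr_time() may be called directly
		 */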

		Mode _mode { TIMER_SESSION };

		Mutex _mutex { };

		Genode::Signal_receiver _sig_rec { };

		Genode::Signal_context _default_sigh_ctx { };

		Genode::Signal_context_capability
			_default_sigh_cap = _sig_rec.manage(&_default_sigh_ctx);

		Genode::Signal_context_capability _custom_sigh_cap { };

		void _sigh(Signal_context_capability sigh)
		{
			Session_client::sigh(sigh);
		}


		/****************************************************
		 ** Members for interaction with Timeout framework **
		 ****************************************************/

os/timer: interpolate time via timestamps
Previously, the Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only in a periodic fashion independently from the calls
to Genode::Timer::curr_time. If now one calls Genode::Timer::curr_time,
the function takes the last read remote time value and adapts it using
the timestamp difference since the remote-time read. The conversion
factor from timestamps to time is estimated on every remote-time read
using the last read remote-time value and the timestamp difference since
the last remote time read.
This commit also re-works the timeout test. The test now has two stages.
In the first stage, it tests fast polling of the
Genode::Timer::curr_time. This stage checks the error between locally
interpolated and timer-driver time as well as wether the locally
interpolated time is monotone and sufficiently homogeneous. In the
second stage several periodic and one-shot timeouts are scheduled at
once. This stage checks if the timeouts trigger sufficiently precise.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by the Genode::Timer on base-hw as substitute for the
timestamp. This is because on ARM, the timestamp function uses the ARM
performance counter that stops counting when the WFI (wait for
interrupt) instruction is active. This instruction, however is used by
the base-hw idle contexts that get active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation and we use the kernel internal time instead.
With this commit, the timeout library becomes a basic library. That means
that it is linked against the LDSO which then provides it to the program it
serves. Furthermore, you can't use the timeout library anymore without the
LDSO because through the kernel-dependent LDSO make-files we can achieve a
kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
Open issues:
* The timeout test fails on Raspberry PI because of precision errors in the
first stage. However, this does not render the framework unusable in general
on the RPI but merely is an issue when speaking of microseconds precision.
* If we run on ARM with another Kernel than HW the timestamp speed may
continuously vary from almost 0 up to CPU speed. The Timer, however,
only uses interpolation if the timestamp speed remained stable (12.5%
tolerance) for at least 3 observation periods. Currently, one period is
100ms, so its 300ms. As long as this is not the case,
Timer_session::elapsed_ms is called instead.
Anyway, it might happen that the CPU load was stable for some time so
interpolation becomes active and now the timestamp speed drops. In the
worst case, we would now have 100ms of slowed down time. The bad thing
about it would be, that this also affects the timeout of the period.
Thus, it might "freeze" the local time for more than 100ms.
On the other hand, if the timestamp speed suddenly raises after some
stable time, interpolated time can get too fast. This would shorten the
period but nonetheless may result in drifting away into the far future.
Now we would have the problem that we can't deliver the real time
anymore until it has caught up because the output of Timer::curr_time
shall be monotone. So, effectively local time might "freeze" again for
more than 100ms.
It would be a solution to not use the Trace::timestamp on ARM w/o HW but
a function whose return value causes the Timer to never use
interpolation because of its stability policy.
Fixes #2400
2017-04-21 22:52:23 +00:00
|
|
|
|
|
|
|
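/*
 * Tuning constants of the remote-time reading and time-interpolation
 * mechanism (the _US suffix denotes microsecond values)
 */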
enum { MIN_TIMEOUT_US             = 5000 };
enum { REAL_TIME_UPDATE_PERIOD_US = 500000 };
enum { MAX_INTERPOLATION_QUALITY  = 3 };
enum { MAX_REMOTE_TIME_LATENCY_US = 500 };
enum { MAX_REMOTE_TIME_TRIALS     = 5 };
enum { NR_OF_INITIAL_CALIBRATIONS = 3 * MAX_INTERPOLATION_QUALITY };
enum { MIN_FACTOR_LOG2            = 8 };
Io_signal_handler _signal_handler;
Timeout_handler  *_handler { nullptr };
Mutex             _real_time_mutex { };
uint64_t          _us { elapsed_us() };
Timestamp         _ts { _timestamp() };
Duration          _real_time { Microseconds { _us } };
Duration          _interpolated_time { _real_time };
unsigned          _interpolation_quality { 0 };
uint64_t          _us_to_ts_factor { 1 };
unsigned          _us_to_ts_factor_shift { 0 };

Genode::Timeout_scheduler _timeout_scheduler { *this, Microseconds { 1 } };
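
/*
 * How the interpolation state above plays together, as a simplified
 * model (illustration only, not the actual implementation):
 * '_us_to_ts_factor' approximates timestamp ticks per microsecond,
 * scaled by 2^'_us_to_ts_factor_shift' so that fractional rates survive
 * integer arithmetic. A timestamp delta hence converts back to
 * microseconds via the inverse operation:
 *
 *   uint64_t interpolated_us(uint64_t last_us, uint64_t ts_diff,
 *                            uint64_t factor, unsigned shift)
 *   {
 *       // us_diff = ts_diff / (factor / 2^shift) = (ts_diff << shift) / factor
 *       return last_us + ((ts_diff << shift) / factor);
 *   }
 */
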
Genode::Timeout_scheduler &_switch_to_timeout_framework_mode();
Timestamp _timestamp();
void _update_interpolation_quality(uint64_t min_factor, uint64_t max_factor);
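
/*
 * Sketch of the stability policy behind the quality counter (simplified,
 * hypothetical helper, not the real method body): the quality rises only
 * while the factor estimates of consecutive observation periods stay
 * within the 12.5% tolerance and falls back otherwise; interpolation is
 * trusted once MAX_INTERPOLATION_QUALITY is reached:
 *
 *   void update_quality(uint64_t min_factor, uint64_t max_factor,
 *                       unsigned &quality)
 *   {
 *       // max/min <= 1.125 is equivalent to max - min <= min/8
 *       if (max_factor - min_factor <= min_factor / 8) {
 *           if (quality < MAX_INTERPOLATION_QUALITY)
 *               quality++;
 *       } else if (quality > 0) {
 *           quality--;
 *       }
 *   }
 */
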
uint64_t _ts_to_us_ratio(Timestamp ts, uint64_t us, unsigned shift);
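
/*
 * The fixed-point ratio in a nutshell (simplified sketch): scaling by
 * 2^shift keeps precision for ratios close to or below one; presumably,
 * MIN_FACTOR_LOG2 bounds the scaled factor from below for that purpose:
 *
 *   uint64_t ts_to_us_ratio(uint64_t ts, uint64_t us, unsigned shift)
 *   {
 *       return us ? (ts << shift) / us : 0;   // guard against div by zero
 *   }
 */
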
void _update_real_time();

Duration _update_interpolated_time(Duration &interpolated_time);

void _handle_timeout();
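
/*
 * Monotonicity guard (simplified sketch): the output of curr_time shall
 * be monotone, so a freshly interpolated value is published only if it
 * does not lie behind the previously published one:
 *
 *   uint64_t monotone_us(uint64_t fresh_us, uint64_t &published_us)
 *   {
 *       if (fresh_us > published_us)
 *           published_us = fresh_us;
 *
 *       return published_us;
 *   }
 */
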
/*****************
 ** Time_source **
 *****************/
void set_timeout(Microseconds duration, Timeout_handler &handler) override;
Microseconds max_timeout() const override { return Microseconds(REAL_TIME_UPDATE_PERIOD_US); }
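
/*
 * Capping 'max_timeout' at REAL_TIME_UPDATE_PERIOD_US presumably ensures
 * that the timeout scheduler never sleeps past the next due remote-time
 * read, so the interpolation base stays fresh.
 */
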
public:
struct Method_cannot_be_used_in_timeout_framework_mode : Genode::Exception { };
/**
 * Constructor
 *
 * \param env    environment used for construction (e.g. quota trading)
 * \param ep     entrypoint used as timeout handler execution context
 * \param label  optional label used in session routing
 */
Connection(Genode::Env &env,
           Genode::Entrypoint &ep,
           char const *label = "");
/**
 * Convenience constructor wrapper using the environment's entrypoint as
 * timeout handler execution context
 */
Connection(Genode::Env &env, char const *label = "");
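
/*
 * Note: which of the two modes a connection ends up in is not visible in
 * this excerpt; the '_switch_to_timeout_framework_mode' hook above
 * suggests a lazy switch on first use of the timeout framework, after
 * which the legacy methods below throw
 * 'Method_cannot_be_used_in_timeout_framework_mode'.
 */
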
~Connection() { _sig_rec.dissolve(&_default_sigh_ctx); }
/*
 * Intercept 'sigh' to keep track of customized signal handlers
 *
 * \noapi
 * \deprecated  Use One_shot_timeout (or Periodic_timeout) instead
 */
void sigh(Signal_context_capability sigh) override
{
    if (_mode == TIMEOUT_FRAMEWORK) {
        throw Method_cannot_be_used_in_timeout_framework_mode();
    }
    _custom_sigh_cap = sigh;
    Session_client::sigh(_custom_sigh_cap);
}
/*
 * Block for a time span of 'us' microseconds
 *
 * \noapi
 * \deprecated  Use One_shot_timeout (or Periodic_timeout) instead
 */
void usleep(uint64_t us) override
{
    if (_mode == TIMEOUT_FRAMEWORK) {
        throw Method_cannot_be_used_in_timeout_framework_mode();
    }

    /*
     * Omit the interaction with the timer driver for the corner case
     * of not sleeping at all. This corner case may be triggered when
     * polling is combined with sleeping (as some device drivers do).
     * If we passed the sleep operation to the timer driver, the
     * timer would apply its policy about a minimum sleep time to
     * the sleep operation, which is not desired when polling.
     */
    if (us == 0)
        return;

    /* serialize sleep calls issued by different threads */
    Mutex::Guard guard(_mutex);

    /* temporarily install the default signal handler */
    if (_custom_sigh_cap.valid())
        Session_client::sigh(_default_sigh_cap);

    /* trigger timeout at default signal handler */
    trigger_once(us);
    _sig_rec.wait_for_signal();

    /* revert custom signal handler if registered */
    if (_custom_sigh_cap.valid())
        Session_client::sigh(_custom_sigh_cap);
}
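
/*
 * Sketch of the recommended replacement for the deprecated blocking
 * calls (assuming the 'One_shot_timeout' helper referred to by the
 * deprecation notes; names like 'Main' are illustrative and the exact
 * constructor signatures may differ):
 *
 *   struct Main
 *   {
 *       Timer::Connection timer;
 *
 *       // called by the entrypoint when the timeout triggers
 *       void handle(Genode::Duration) { }
 *
 *       Timer::One_shot_timeout<Main> timeout { timer, *this, &Main::handle };
 *
 *       Main(Genode::Env &env) : timer(env, env.ep())
 *       {
 *           timeout.schedule(Genode::Microseconds { 250*1000 });
 *       }
 *   };
 */
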
/*
 * Block for a time span of 'ms' milliseconds
 *
 * \noapi
 * \deprecated Use One_shot_timeout (or Periodic_timeout) instead
 */
void msleep(uint64_t ms) override
{
	if (_mode == TIMEOUT_FRAMEWORK) {
		throw Method_cannot_be_used_in_timeout_framework_mode();
	}
	usleep(1000*ms);
}
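
As a non-blocking replacement for the deprecated 'msleep', here is a
minimal sketch of the 'One_shot_timeout' approach recommended above. It
assumes a component 'Main' that owns the timer connection; all names
besides the timer API itself are illustrative.

#include <base/component.h>
#include <base/log.h>
#include <timer_session/connection.h>

using namespace Genode;

struct Main
{
	Env &_env;

	Timer::Connection _timer { _env };

	/* executes '_handle_timeout' once the scheduled timeout expires */
	Timer::One_shot_timeout<Main> _timeout {
		_timer, *this, &Main::_handle_timeout };

	void _handle_timeout(Duration)
	{
		log("250 ms have passed");
	}

	Main(Env &env) : _env(env)
	{
		/* non-blocking counterpart of the former 'msleep(250)' */
		_timeout.schedule(Microseconds { 250*1000 });
	}
};

void Component::construct(Env &env) { static Main main(env); }
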
/*****************
 ** Time_source **
 *****************/
	Duration curr_time() override;
};

#endif /* _INCLUDE__TIMER_SESSION__CONNECTION_H_ */
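
For completeness, a minimal sketch of how a client may use the 'curr_time'
interface declared above. It assumes a 'Timer::Connection' reference is
available and that 'Duration::trunc_to_plain_us' is provided by the Genode
base API as expected here.

#include <base/log.h>
#include <timer_session/connection.h>

/* minimal sketch: compare two back-to-back local time readings */
void print_read_latency(Timer::Connection &timer)
{
	using namespace Genode;

	Duration const t1 = timer.curr_time();
	Duration const t2 = timer.curr_time();

	/* truncate both readings to plain microsecond values */
	uint64_t const diff_us = t2.trunc_to_plain_us().value
	                       - t1.trunc_to_plain_us().value;

	log("two back-to-back readings are ", diff_us, " us apart");
}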