os/timer: interpolate time via timestamps

Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as its back end. Now, Genode::Timer reads
this remote time only periodically, independently of the calls to
Genode::Timer::curr_time. When Genode::Timer::curr_time is called, the
function takes the most recently read remote-time value and extrapolates
it using the timestamp difference since that remote-time read. The
conversion factor from timestamps to time is re-estimated on every
remote-time read, using the previously read remote-time value and the
timestamp difference since that previous read.

This commit also reworks the timeout test. The test now has two stages.
The first stage tests fast polling of Genode::Timer::curr_time. It
checks the error between the locally interpolated time and the
timer-driver time, as well as whether the locally interpolated time is
monotone and sufficiently homogeneous. In the second stage, several
periodic and one-shot timeouts are scheduled at once, and the test
checks whether the timeouts trigger sufficiently precisely.

This commit adds the new Kernel::time syscall to base-hw. The syscall
is used solely by Genode::Timer on base-hw as a substitute for the
timestamp. On ARM, the timestamp function uses the ARM performance
counter, which stops counting while the WFI (wait for interrupt)
instruction is active. This instruction, however, is used by the
base-hw idle contexts that become active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice
for time interpolation, and the kernel-internal time is used instead.

With this commit, the timeout library becomes a basic library, i.e., it
is linked against the LDSO, which then provides it to the program it
serves. Consequently, the timeout library can no longer be used without
the LDSO, because the kernel-dependent LDSO make files are what make a
kernel-dependent timeout implementation possible.

This commit also introduces a structured Duration type that shall
successively replace the use of Microseconds, Milliseconds, and plain
integer types for duration values.

Open issues:

* The timeout test fails on the Raspberry Pi because of precision
  errors in the first stage. This does not render the framework
  unusable on the Raspberry Pi in general; it is an issue only where
  microsecond precision matters.

* On ARM with a kernel other than base-hw, the timestamp speed may
  continuously vary between almost 0 and full CPU speed. The Timer,
  however, uses interpolation only if the timestamp speed has remained
  stable (12.5% tolerance) for at least 3 observation periods.
  Currently, one period is 100 ms, so that is 300 ms. As long as this
  is not the case, Timer_session::elapsed_ms is called instead.

  Still, the CPU load may be stable long enough for interpolation to
  become active before the timestamp speed drops. In the worst case, we
  would then have 100 ms of slowed-down time. Worse, this also affects
  the timeout that drives the update period itself, so local time might
  "freeze" for more than 100 ms.

  Conversely, if the timestamp speed suddenly rises after some stable
  time, the interpolated time can become too fast. This shortens the
  period but may nonetheless drift far into the future. We then cannot
  report the real time again until it has caught up, because the output
  of Timer::curr_time shall be monotone. So, effectively, local time
  might again "freeze" for more than 100 ms.

  A solution would be to use, on ARM kernels other than base-hw, not
  Trace::timestamp but a function whose return value makes the Timer
  never use interpolation because of its stability policy.

Fixes #2400
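
The interpolation scheme can be condensed into a minimal, self-contained
sketch (illustrative only: the names, the fixed-point shift, and the
concrete numbers are chosen freely and are not the actual Genode
implementation; the 12.5% tolerance and the 3-period stability gate
mirror the policy described above):

  #include <cstdint>
  #include <cstdio>

  enum { FACTOR_SHIFT = 8 }; /* fixed-point bits of the ts-to-us factor */

  struct Interpolator
  {
      uint64_t remote_us { 0 }; /* last remote time read via RPC       */
      uint64_t remote_ts { 0 }; /* local timestamp taken at that read  */
      uint64_t factor    { 0 }; /* us per timestamp tick, shifted left */
      unsigned quality   { 0 }; /* nr. of consecutive stable estimates */

      /* called periodically: re-estimate the conversion factor */
      void remote_time_read(uint64_t new_us, uint64_t new_ts)
      {
          uint64_t const ts_diff = new_ts - remote_ts;
          if (ts_diff) {
              uint64_t const f =
                  ((new_us - remote_us) << FACTOR_SHIFT) / ts_diff;

              /* count how long the factor stays within 12.5% tolerance */
              bool const stable = factor && f < factor + factor/8
                                         && f > factor - factor/8;
              quality = stable ? quality + 1 : 0;
              factor  = f;
          }
          remote_us = new_us;
          remote_ts = new_ts;
      }

      /* called on curr_time: extrapolate from the last remote read */
      uint64_t curr_time_us(uint64_t now_ts) const
      {
          if (quality < 3) /* not stable yet: use plain remote time */
              return remote_us;
          return remote_us
               + (((now_ts - remote_ts) * factor) >> FACTOR_SHIFT);
      }
  };

  int main()
  {
      Interpolator ip { };

      /* four reads with a steady rate of 100000 us per 1000000 ticks */
      for (unsigned i = 1; i <= 4; i++)
          ip.remote_time_read(i*100000, i*1000000);

      /* half a period after the last read: prints 448828 (~450000 us) */
      std::printf("%llu\n", (unsigned long long)ip.curr_time_us(4500000));
  }

With a steady timestamp rate, the factor estimate stabilizes after three
reads, and a query half a period after the last read yields roughly
450000 us, slightly less due to fixed-point truncation.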

/*
 * \brief  Connection to timer service and timeout scheduler
 * \author Norman Feske
 * \date   2008-08-22
 */

/*
 * Copyright (C) 2008-2017 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU Affero General Public License version 3.
 */

#ifndef _INCLUDE__TIMER_SESSION__CONNECTION_H_
#define _INCLUDE__TIMER_SESSION__CONNECTION_H_

/* Genode includes */
#include <timer_session/client.h>
#include <base/connection.h>
#include <util/reconstructible.h>
#include <base/entrypoint.h>
#include <timer/timeout.h>
#include <trace/timestamp.h>

namespace Timer
{
	class Connection;

	template <typename> class Periodic_timeout;
	template <typename> class One_shot_timeout;
}


/**
 * Periodic timeout linked to a custom handler, scheduled at construction time
 */
template <typename HANDLER>
struct Timer::Periodic_timeout : private Genode::Noncopyable
{
	private:

		using Duration          = Genode::Duration;
		using Timeout           = Genode::Timeout;
		using Timeout_scheduler = Genode::Timeout_scheduler;
		using Microseconds      = Genode::Microseconds;

		typedef void (HANDLER::*Handler_method)(Duration);

		Timeout _timeout;

		struct Handler : Timeout::Handler
		{
			HANDLER              &object;
			Handler_method const  method;

			Handler(HANDLER &object, Handler_method method)
			: object(object), method(method) { }


			/**********************
			 ** Timeout::Handler **
			 **********************/

			void handle_timeout(Duration curr_time) override {
				(object.*method)(curr_time); }

		} _handler;

	public:

		Periodic_timeout(Timeout_scheduler &timeout_scheduler,
		                 HANDLER           &object,
		                 Handler_method     method,
		                 Microseconds       duration)
		:
			_timeout(timeout_scheduler), _handler(object, method)
		{
			_timeout.schedule_periodic(duration, _handler);
		}
};

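/*
 * Usage sketch (illustrative, not part of this header): the handler method
 * is bound and the timeout is scheduled as soon as the member is
 * constructed. 'Main' and '_handle_period' are hypothetical names; the
 * sketch assumes a 'Timer::Connection', which implements
 * 'Genode::Timeout_scheduler' (see below).
 *
 *   struct Main
 *   {
 *     Timer::Connection _timer;
 *
 *     void _handle_period(Genode::Duration) { Genode::log("period"); }
 *
 *     Timer::Periodic_timeout<Main> _period {
 *       _timer, *this, &Main::_handle_period,
 *       Genode::Microseconds(250*1000) };
 *
 *     Main(Genode::Env &env) : _timer(env) { }
 *   };
 */
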
/**
 * One-shot timeout linked to a custom handler, scheduled manually
 */
template <typename HANDLER>
class Timer::One_shot_timeout : private Genode::Noncopyable
{
	private:

		using Duration          = Genode::Duration;
		using Timeout           = Genode::Timeout;
		using Timeout_scheduler = Genode::Timeout_scheduler;
		using Microseconds      = Genode::Microseconds;

		typedef void (HANDLER::*Handler_method)(Duration);

		Timeout _timeout;

		struct Handler : Timeout::Handler
		{
			HANDLER              &object;
			Handler_method const  method;

			Handler(HANDLER &object, Handler_method method)
			: object(object), method(method) { }


			/**********************
			 ** Timeout::Handler **
			 **********************/

			void handle_timeout(Duration curr_time) override {
				(object.*method)(curr_time); }

		} _handler;

	public:

		One_shot_timeout(Timeout_scheduler &timeout_scheduler,
		                 HANDLER           &object,
		                 Handler_method     method)
		: _timeout(timeout_scheduler), _handler(object, method) { }

		void schedule(Microseconds duration) {
			_timeout.schedule_one_shot(duration, _handler); }

		void discard() { _timeout.discard(); }

		bool scheduled() { return _timeout.scheduled(); }
};

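/*
 * Usage sketch (illustrative, not part of this header): in contrast to
 * 'Periodic_timeout', a one-shot timeout is armed explicitly via
 * 'schedule' and can be re-armed or discarded at any time. 'Main' and
 * '_handle_timeout' are hypothetical names.
 *
 *   struct Main
 *   {
 *     Timer::Connection _timer;
 *
 *     void _handle_timeout(Genode::Duration) { Genode::log("timeout"); }
 *
 *     Timer::One_shot_timeout<Main> _timeout {
 *       _timer, *this, &Main::_handle_timeout };
 *
 *     Main(Genode::Env &env) : _timer(env)
 *     {
 *       _timeout.schedule(Genode::Microseconds(1000*1000));
 *
 *       if (_timeout.scheduled())
 *         _timeout.discard();   // cancel before it triggers
 *     }
 *   };
 */
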
/**
 * Connection to timer service and timeout scheduler
 *
 * Multiplexes a timer session amongst different timeouts.
 */
class Timer::Connection : public  Genode::Connection<Session>,
                          public  Session_client,
                          private Genode::Time_source,
                          public  Genode::Timeout_scheduler
{
	private:

		using Timeout         = Genode::Timeout;
		using Timeout_handler = Genode::Time_source::Timeout_handler;
		using Timestamp       = Genode::Trace::Timestamp;
		using Duration        = Genode::Duration;
		using Mutex           = Genode::Mutex;
		using Microseconds    = Genode::Microseconds;
		using Milliseconds    = Genode::Milliseconds;
		using Entrypoint      = Genode::Entrypoint;

		/*
		 * Noncopyable
		 */
		Connection(Connection const &);
		Connection &operator = (Connection const &);

		/*
		 * The mode determines which interface of the timer connection is
		 * enabled. Initially, a timer connection is in LEGACY mode. When in
		 * MODERN mode, a call to the LEGACY interface causes an exception.
		 * When in LEGACY mode, a call to the MODERN interface switches the
		 * connection to MODERN mode. The LEGACY interface is deprecated,
		 * please prefer the MODERN interface.
		 *
		 * LEGACY = Timer-session interface with the blocking calls usleep
		 *          and msleep
		 * MODERN = more precise curr_time and non-blocking, multiplexed
		 *          timeout handling via Periodic_timeout and One_shot_timeout
		 */
		enum Mode { LEGACY, MODERN };

		Mode _mode { LEGACY };

		Mutex _mutex { };

		Genode::Signal_receiver _sig_rec { };
		Genode::Signal_context  _default_sigh_ctx { };

		Genode::Signal_context_capability
			_default_sigh_cap = _sig_rec.manage(&_default_sigh_ctx);

		Genode::Signal_context_capability _custom_sigh_cap { };

		void _enable_modern_mode();

		void _sigh(Signal_context_capability sigh)
		{
			Session_client::sigh(sigh);
		}


		/*************************
		 ** Time_source helpers **
		 *************************/

		enum { MIN_TIMEOUT_US             = 5000 };
		enum { REAL_TIME_UPDATE_PERIOD_US = 500000 };
		enum { MAX_INTERPOLATION_QUALITY  = 3 };
		enum { MAX_REMOTE_TIME_LATENCY_US = 500 };
		enum { MAX_REMOTE_TIME_TRIALS     = 5 };
		enum { NR_OF_INITIAL_CALIBRATIONS = 3 * MAX_INTERPOLATION_QUALITY };
		enum { MIN_FACTOR_LOG2            = 8 };

		Genode::Io_signal_handler<Connection> _signal_handler;

		Timeout_handler *_handler { nullptr };

		Mutex _real_time_mutex { };

		/* last remote time in microseconds, read via the session RPC */
		uint64_t _us { elapsed_us() };

		/* timestamp at the last real-time update */
		Timestamp _ts { _timestamp() };

		/* remote time as Duration at the last real-time update */
		Duration _real_time { Microseconds(_us) };

		/* locally interpolated time, kept monotone */
		Duration _interpolated_time { _real_time };

		/* number of consecutive stable factor measurements */
		unsigned _interpolation_quality { 0 };

		/* current fixed-point conversion factor from microseconds to timestamps */
		uint64_t _us_to_ts_factor { 1UL };

		/* left-shift applied to the factor for fixed-point precision */
		unsigned _us_to_ts_factor_shift { 0 };
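
		/*
		 * Illustrative sketch (not part of the original interface) of how
		 * the members above interact during interpolation. Assuming 'now'
		 * is a fresh timestamp, the elapsed local time is derived from the
		 * timestamp difference via the fixed-point factor:
		 *
		 *   Timestamp const ts_diff = now - _ts;
		 *   uint64_t  const us_diff = (ts_diff << _us_to_ts_factor_shift)
		 *                             / _us_to_ts_factor;
		 *   uint64_t  const interpolated_us = _us + us_diff;
		 *
		 * The shift preserves precision for factors close to 1 at the
		 * price of overflow headroom for large timestamp differences.
		 */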

		/* timestamp back end (Trace::timestamp, or Kernel::time on base-hw) */
		Timestamp _timestamp();

		void _update_interpolation_quality(uint64_t min_factor, uint64_t max_factor);
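
		/*
		 * Hypothetical sketch of the stability policy described in the
		 * commit message above: interpolation quality rises only while
		 * consecutive factor measurements stay within 12.5% of each other
		 * (note that x >> 3 equals x / 8, i.e., 12.5% of x).
		 *
		 *   if (max_factor - min_factor > (max_factor >> 3))
		 *       _interpolation_quality = 0;       // unstable, reset
		 *   else if (_interpolation_quality < 3)
		 *       _interpolation_quality++;         // one more stable period
		 *
		 * Interpolation would then be used only once the quality reached
		 * its maximum of 3 observation periods (100 ms each).
		 */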

		uint64_t _ts_to_us_ratio(Timestamp ts, uint64_t us, unsigned shift);
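
		/*
		 * A minimal sketch, assuming this helper computes the fixed-point
		 * microseconds-to-timestamps factor from one measurement, i.e.,
		 * the timestamp difference 'ts' and the time difference 'us'
		 * observed between two remote-time reads:
		 *
		 *   uint64_t _ts_to_us_ratio(Timestamp ts, uint64_t us, unsigned shift)
		 *   {
		 *       return (ts << shift) / us;
		 *   }
		 *
		 * Real code would additionally have to guard against us == 0 and
		 * against the left shift overflowing 64 bits.
		 */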

		/* re-read the remote time and re-estimate the conversion factor */
		void _update_real_time();

		/* advance the interpolated time, keeping it monotone */
		Duration _update_interpolated_time(Duration &interpolated_time);

		void _handle_timeout();
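
		/*
		 * Sketch (an assumption, not the verbatim implementation) of the
		 * periodic update flow: '_handle_timeout' re-reads the remote time
		 * every REAL_TIME_UPDATE_PERIOD_US microseconds and re-estimates
		 * the factor, while '_update_interpolated_time' enforces the
		 * monotonicity requirement stated in the commit message:
		 *
		 *   Duration _update_interpolated_time(Duration &interpolated_time)
		 *   {
		 *       if (interpolated_time.less_than(_interpolated_time))
		 *           interpolated_time = _interpolated_time;   // clamp
		 *       else
		 *           _interpolated_time = interpolated_time;
		 *       return _interpolated_time;
		 *   }
		 */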

		/*****************
		 ** Time_source **
		 *****************/

		void schedule_timeout(Microseconds duration, Timeout_handler &handler) override;

		Microseconds max_timeout() const override { return Microseconds(REAL_TIME_UPDATE_PERIOD_US); }

		/*******************************
		 ** Timeout_scheduler helpers **
		 *******************************/

		Genode::Alarm_timeout_scheduler _scheduler { *this };

		/***********************
		 ** Timeout_scheduler **
		 ***********************/

		void _schedule_one_shot(Timeout &timeout, Microseconds duration) override;
		void _schedule_periodic(Timeout &timeout, Microseconds duration) override;
		void _discard(Timeout &timeout) override;

	public:

		struct Cannot_use_both_legacy_and_modern_interface : Genode::Exception { };

		/**
		 * Constructor
		 *
		 * \param env    environment used for construction (e.g. quota trading)
		 * \param ep     entrypoint used as timeout-handler execution context
		 * \param label  optional label used in session routing
		 */
		Connection(Genode::Env &env, Genode::Entrypoint &ep, char const *label = "");

		/**
		 * Convenience constructor wrapper using the environment's entrypoint
		 * as timeout-handler execution context
		 */
		Connection(Genode::Env &env, char const *label = "");

		~Connection() { _sig_rec.dissolve(&_default_sigh_ctx); }

		/*
		 * Intercept 'sigh' to keep track of customized signal handlers
		 *
		 * \noapi
		 * \deprecated  Use One_shot_timeout (or Periodic_timeout) instead
		 */
		void sigh(Signal_context_capability sigh) override
		{
			if (_mode == MODERN) {
				throw Cannot_use_both_legacy_and_modern_interface();
			}

			_custom_sigh_cap = sigh;
			Session_client::sigh(_custom_sigh_cap);
		}

		/*
		 * Block for a time span of 'us' microseconds
		 *
		 * \noapi
		 * \deprecated  Use One_shot_timeout (or Periodic_timeout) instead
		 */
		void usleep(uint64_t us) override
		{
			if (_mode == MODERN) {
				throw Cannot_use_both_legacy_and_modern_interface();
			}

			/*
			 * Omit the interaction with the timer driver for the corner case
			 * of not sleeping at all. This corner case may be triggered when
			 * polling is combined with sleeping (as some device drivers do).
			 * If we passed the sleep operation to the timer driver, the
			 * timer would apply its policy about a minimum sleep time to
			 * the sleep operation, which is not desired when polling.
			 */
			if (us == 0)
				return;

			/* serialize sleep calls issued by different threads */
			Mutex::Guard guard(_mutex);

			/* temporarily install the default signal handler */
			if (_custom_sigh_cap.valid())
				Session_client::sigh(_default_sigh_cap);

			/* trigger timeout at default signal handler */
			trigger_once(us);
			_sig_rec.wait_for_signal();

			/* revert custom signal handler if registered */
			if (_custom_sigh_cap.valid())
				Session_client::sigh(_custom_sigh_cap);
		}
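
		/*
		 * Hypothetical polling pattern that motivates the 'us == 0'
		 * shortcut above: a driver may alternate between polling and
		 * sleeping and pass a computed budget that can legitimately be
		 * zero, in which case no round trip to the timer driver is wanted.
		 *
		 *   while (!device_ready())              // illustrative names only
		 *       timer.usleep(remaining_budget_us());
		 */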

		/*
		 * Block for a time span of 'ms' milliseconds
		 *
		 * \noapi
		 * \deprecated  Use One_shot_timeout (or Periodic_timeout) instead
		 */
		void msleep(uint64_t ms) override
		{
			if (_mode == MODERN) {
				throw Cannot_use_both_legacy_and_modern_interface();
			}

			usleep(1000*ms);
		}

		/***********************************
		 ** Timeout_scheduler/Time_source **
		 ***********************************/

		Duration curr_time() override;
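
		/*
		 * Example, as a sketch under the assumption that the caller holds
		 * a Timer::Connection named 'timer': reading the locally
		 * interpolated time without a round trip to the timer driver.
		 *
		 *   Genode::Duration const time = timer.curr_time();
		 *   Genode::uint64_t const us = time.trunc_to_plain_us().value;
		 *   Genode::uint64_t const ms = time.trunc_to_plain_ms().value;
		 */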
};

#endif /* _INCLUDE__TIMER_SESSION__CONNECTION_H_ */