#
# Build
#

#
# The size of the result buffers in the fast-time-polling test
#
proc fast_polling_buf_size { } {

	if {[expr [have_board pbxa9] && [have_spec foc]]} { return 40000000 }
	if {[expr [have_board imx53_qsb_tz] && [have_spec hw]]} { return 40000000 }
	if {[expr [have_board rpi] && [have_spec hw]]} { return 40000000 }
	if {[expr [have_board virt_qemu_riscv] && [have_spec hw]]} { return 15000000 }
	return 80000000
}

#
# Background: os/timer interpolates time via timestamps (issue #2400)
#
# Previously, Genode::Timer::curr_time always used the
# Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads this
# remote time only periodically, independently of the calls to
# Genode::Timer::curr_time. A call to curr_time takes the last remote-time
# value read and adapts it using the timestamp difference since that read.
# The conversion factor from timestamps to time is estimated on every
# remote-time read, using the last remote-time value read and the timestamp
# difference since the previous remote-time read.
#
# The timeout test has two stages. The first stage tests fast polling of
# Genode::Timer::curr_time: it checks the error between the locally
# interpolated time and the timer-driver time, as well as whether the locally
# interpolated time is monotone and sufficiently homogeneous. The second
# stage schedules several periodic and one-shot timeouts at once and checks
# whether they trigger sufficiently precisely.
#
# On base-hw, the Kernel::time syscall substitutes for the timestamp, because
# on ARM the timestamp function uses the ARM performance counter, which stops
# counting while the WFI (wait for interrupt) instruction is active. This
# instruction, however, is used by the base-hw idle contexts that become
# active when no user thread needs to be scheduled. Thus, the ARM performance
# counter is not a good choice for time interpolation, and the
# kernel-internal time is used instead.
#
# The timeout library is a basic library, which means that it is linked
# against the LDSO, which then provides it to the program it serves. The
# library cannot be used without the LDSO, because the kernel-dependent LDSO
# make files are what enable a kernel-dependent timeout implementation.
# Furthermore, a structured Duration type shall successively replace the use
# of Microseconds, Milliseconds, and integer types for duration values.
#
# Open issues:
#
# * The timeout test fails on the Raspberry Pi because of precision errors
#   in the first stage. This does not render the framework unusable on the
#   Rpi in general but is an issue only where microsecond precision matters.
#
# * If we run on ARM with a kernel other than HW, the timestamp speed may
#   continuously vary from almost 0 up to CPU speed. The timer, however,
#   only uses interpolation if the timestamp speed remained stable (12.5 %
#   tolerance) for at least 3 observation periods. One period is currently
#   100 ms, so that makes 300 ms. As long as this is not the case,
#   Timer_session::elapsed_ms is called instead. Still, the CPU load may be
#   stable for a while, so that interpolation becomes active, and then the
#   timestamp speed drops. In the worst case, we would have 100 ms of
#   slowed-down time. Worse, this also affects the timeout of the observation
#   period itself, so local time might "freeze" for more than 100 ms. On the
#   other hand, if the timestamp speed suddenly rises after some stable time,
#   the interpolated time can get too fast. This shortens the period but may
#   nonetheless drift far into the future. Then the real time cannot be
#   delivered until it has caught up, because the output of Timer::curr_time
#   shall be monotone, so local time might again "freeze" for more than
#   100 ms. A solution would be to use, on ARM without HW, not
#   Trace::timestamp but a function whose return value causes the timer to
#   never use interpolation because of its stability policy.
#
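
#
# The two procs below are a minimal, stand-alone sketch of the interpolation
# arithmetic described above. They are not part of the Genode API and are
# never called by this script; the names and the microsecond/raw-tick units
# are assumptions. Between two remote-time reads, local time is the last
# remote value plus the elapsed timestamp ticks divided by the estimated
# ticks-per-microsecond factor.
#
proc estimate_ticks_per_us { remote_us prev_remote_us ts prev_ts } {
	# re-estimated on every remote-time read from the deltas since the
	# previous read
	return [expr {double($ts - $prev_ts) / ($remote_us - $prev_remote_us)}]
}

proc interpolated_time_us { remote_us ts_at_read ts_now ticks_per_us } {
	# adapt the last remote-time value by the locally elapsed ticks
	return [expr {$remote_us + int(($ts_now - $ts_at_read) / $ticks_per_us)}]
}

#
# For example:
#
#   set f [estimate_ticks_per_us 2000000 1000000 2400000 400000]   ;# -> 2.0
#   interpolated_time_us 2000000 2400000 2400800 $f                ;# -> 2000400
#
# The stability policy from the notes above, expressed as arithmetic:
# interpolation would only be used while the factor stays within the 12.5 %
# tolerance of its predecessor, e.g.
#
#   [expr {abs($factor - $prev_factor) <= 0.125 * $prev_factor}]
#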

#
# Whether the platform allows for timeouts that trigger with a precision < 50 milliseconds
#
proc precise_timeouts { } {

	#
	# On QEMU, NOVA uses the pretty unstable TSC emulation as primary time source.
	#
	if {[have_include "power_on/qemu"] && [have_spec nova]} { return false }

	return true
}

#
# Whether the platform allows for a timestamp that has a precision < 1 millisecond
#
proc precise_time { } {

	#
	# On QEMU, timing is not stable enough for microsecond precision
	#
	if {[have_include "power_on/qemu"]} { return false }

	#
	# On ARM, we do not have a component-local time source in hardware. The
	# ARM performance counter has no reliable frequency as the ARM idle
	# command halts the counter. Thus, we do not use local time interpolation
	# on ARM, except when we are on the HW kernel, in which case we can read
	# out the kernel time instead.
	#
	if {[expr [have_spec arm] && ![have_spec hw]]} { return false }

	#
	# On 64-bit x86 with seL4, the test needs around 80 MB that must be
	# composed entirely of 4KB pages due to current limitations of the seL4
	# port. Thus, core must flush the page-table caches pretty often during
	# the test, which is an expensive high-priority operation and makes it
	# impossible to provide a highly precise time.
	#
	if {[have_spec x86_64] && [have_spec sel4]} { return false }

	#
	# Older x86 machines do not have an invariant timestamp
	#
	if {[have_spec x86] && ![have_spec hw]} { return dynamic }

	return true
}
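
#
# Note that precise_time is tri-state: besides true and false, it can return
# "dynamic" for older x86 machines, whose timestamp is not invariant. The
# test config below consumes the raw string value.
#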

#
# Whether the platform allows for a 'Timer::Connection::elapsed_ms'
# implementation that has a precision < 2 ms
#
proc precise_ref_time { } {

	#
	# On Fiasco and Fiasco.OC, which use kernel timing, 'elapsed_ms' is
	# pretty imprecise/unsteady (up to 3 ms deviation) for a reason that
	# is not clearly determined yet. So, on these platforms, our locally
	# interpolated time seems to be fine but the reference time is bad.
	#
	if {[have_spec foc] || [have_spec fiasco]} { return false }

	return true
}
build { core init lib/ld drivers/platform timer test/timeout }

#
# Boot image
#

create_boot_directory
install_config {
<config prio_levels="2">

	<parent-provides>
		<service name="ROM"/>
		<service name="IRQ"/>
		<service name="IO_MEM"/>
		<service name="IO_PORT"/>
		<service name="PD"/>
		<service name="RM"/>
		<service name="CPU"/>
		<service name="LOG"/>
	</parent-provides>

	<default-route>
		<any-service><parent/><any-child/></any-service>
	</default-route>

	<default caps="100"/>

	<start name="timer">
		<resource name="RAM" quantum="10M"/>
		<provides><service name="Timer"/></provides>
	</start>

	<start name="test" priority="-1">
		<binary name="test-timeout"/>
		<resource name="RAM" quantum="250M"/>
		<config precise_time="} [precise_time] {"
		        precise_ref_time="} [precise_ref_time] {"
		        precise_timeouts="} [precise_timeouts] {"
		        fast_polling_buf_size="} [fast_polling_buf_size] {"/>
	</start>
</config>
}
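
#
# The '"} [precise_time] {"' sequences above briefly leave the brace-quoted
# string, so the bracketed proc calls get evaluated and their results are
# passed to install_config as separate arguments, which the run tool
# evidently joins into a single config. The same idiom in plain Tcl, with a
# hypothetical helper:
#
#   proc splice { args } { return [join $args ""] }
#   splice {precise_time="} [precise_time] {"}   ;# e.g. precise_time="true"
#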
build_boot_image [build_artifacts]

#
# Execution
#

append qemu_args "-nographic "
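#
# The test reports its verdict via the exit value of the "test" child: the
# run succeeds only if init reports an exit value of 0 within the 900-second
# timeout.
#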
run_genode_until "child \"test\" exited with exit value.*\n" 900
grep_output {\[init\] child "test" exited with exit value}
compare_output_to {[init] child "test" exited with exit value 0}