Commit Graph

16 Commits

Author SHA1 Message Date
Martin Stein
8eecb39792 test/timeout: configurable fast-polling buffers
On some platforms (foc+pbxa9, hw+imx53_qsb_tz, hw+rpi), the default buffer size
is too much for the RAM available on the board. Thus, decrease the buffer size
and thereby the number of polls for these platforms only.

Fixes #3354
2019-05-27 14:46:54 +02:00
Norman Feske
bf62d6b896 Move timer from os to base repository
Since the timer and the timeout handling are part of the base library (the
dynamic linker), they belong to the base repository.

Besides moving the timer and its related infrastructure (alarm, timeout
libs, tests) to the base repository, this patch also moves the timer
from the 'drivers' subdirectory directly to 'src' and disambiguates the
timer's build locations for the various kernels. Otherwise the different
timer implementations could interfere with each other when using one
build directory with multiple kernels.

Note that this patch changes the include paths for the former os/timer,
os/alarm.h, os/duration.h, and os/timed_semaphore.h to base/.

Issue #3101
2019-01-14 12:33:57 +01:00
Stefan Kalkowski
36fe50ebad base: unify mp_server and affinity test
* remove the outdated cpufreq test for Arndale
* execute the new SMP test in the nightly tests on hardware, not in Qemu

Ref #3041
2018-11-27 11:36:35 +01:00
Martin Stein
3d12e7b242 timeout x86_64 sel4: do not expect a precise time
On x86 64 bit with seL4, the test needs around 80 MB that must be
composed entirely of 4-KB pages due to current limitations of the seL4
port. Thus, Core must flush the page-table caches quite often during
the test, which is an expensive, high-priority operation and makes it
impossible to provide a highly precise time.
2017-11-30 11:23:20 +01:00
Martin Stein
12eb7a44d0 x86 timeout test: consider instable tsc (quickfix)
This is a quick fix to avoid testing microseconds-precise time on older x86
machines that have no invariant TSC as interpolation source.

Ref #2400
2017-08-30 10:00:01 +02:00
Martin Stein
ffaf99ae86 timeout test: remove error-limit exception for PIT
The problems with the PIT timer drivers were fixed, so it is no longer
necessary to treat them specially.

Ref #2400
2017-08-28 16:49:49 +02:00
Martin Stein
67fc1ec42b timeout test: prioritize timer driver over test
Ref #2400
2017-06-29 12:00:00 +02:00
Martin Stein
61f59818d3 pit/fiasco timeout: raise time error tolerance
On platforms that use the PIT timer driver, 'elapsed_ms' is pretty
imprecise/unsteady (up to 3 ms deviation) for a reason that has not been
clearly determined yet. On Fiasco and Fiasco.OC, which use kernel timing,
it is the same. So, on these platforms, our locally interpolated time
seems to be fine, but the reference time is bad. Until this is fixed, we
raise the error tolerance for these platforms in the run script.

Ref #2400
2017-06-29 11:59:59 +02:00
Martin Stein
5fec4a2166 timeout test: raise error tolerance on nova + qemu
On QEMU, NOVA uses the rather unstable TSC emulation as its primary time
source. Thus, timeouts do not trigger with the common precision (< 50
ms). Use an error tolerance of 200 ms for this platform combination.

Ref #2400
2017-06-29 11:59:49 +02:00
Martin Stein
23337eb6e7 run/timeout: run also on arm w/o hw and qemu
On platforms where we do not have local time interpolation, we can simply
skip the first test stage in the timeout test. This way, we can at least
test the rest.

Fixes #2435
2017-05-31 17:50:28 +02:00
Christian Helmuth
8bd0efced6 Remove obsolete RAM/CAP services from run scripts
Adapted launchpad and also the rm_fault and resource_request tests.

Issue #2407
2017-05-31 13:16:22 +02:00
Stefan Kalkowski
0fb672b493 run: use default Qemu memory size for x86
Fix #2428
2017-05-31 13:16:19 +02:00
Martin Stein
c70fed29f7 os/timer: interpolate time via timestamps
Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only in a periodic fashion, independently of the calls
to Genode::Timer::curr_time. If one now calls Genode::Timer::curr_time,
the function takes the last-read remote time value and adapts it using
the timestamp difference since the remote-time read. The conversion
factor from timestamps to time is estimated on every remote-time read,
using the last-read remote-time value and the timestamp difference since
the last remote-time read.
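
As a rough, self-contained sketch of this scheme (plain C++, hypothetical
names, made-up numbers; not the actual Genode::Timer code):

  #include <cstdint>
  #include <cstdio>

  /* hypothetical stand-in for the scheme described above, not Genode API */
  struct Interpolator
  {
      std::uint64_t remote_us   = 0;  /* last remote time read (us)          */
      std::uint64_t remote_ts   = 0;  /* timestamp taken at that remote read */
      double        us_per_tick = 0;  /* factor estimated per remote read    */

      /* called periodically, independently of curr_time_us() */
      void remote_time_read(std::uint64_t new_remote_us, std::uint64_t new_ts)
      {
          if (remote_ts && new_ts > remote_ts)
              us_per_tick = double(new_remote_us - remote_us)
                          / double(new_ts - remote_ts);
          remote_us = new_remote_us;
          remote_ts = new_ts;
      }

      /* last remote value plus interpolated timestamp difference */
      std::uint64_t curr_time_us(std::uint64_t now_ts) const
      {
          return remote_us
               + std::uint64_t(double(now_ts - remote_ts) * us_per_tick);
      }
  };

  int main()
  {
      Interpolator t;
      t.remote_time_read(1000000, 10000000); /* 1 s remote time at ts 10M  */
      t.remote_time_read(1100000, 11000000); /* 100 ms later: 0.1 us/tick  */
      std::printf("%llu\n", (unsigned long long)t.curr_time_us(11500000));
      /* prints 1150000: 1.1 s remote time + 50 ms interpolated locally    */
  }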

This commit also reworks the timeout test. The test now has two stages.
In the first stage, it tests fast polling of
Genode::Timer::curr_time. This stage checks the error between the locally
interpolated and the timer-driver time as well as whether the locally
interpolated time is monotone and sufficiently homogeneous. In the
second stage, several periodic and one-shot timeouts are scheduled at
once. This stage checks whether the timeouts trigger sufficiently precisely.

This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by Genode::Timer on base-hw as a substitute for the
timestamp. This is because, on ARM, the timestamp function uses the ARM
performance counter, which stops counting when the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation, and we use the kernel-internal time instead.

With this commit, the timeout library becomes a basic library. That means
it is linked against the LDSO, which then provides it to the program it
serves. Furthermore, the timeout library can no longer be used without the
LDSO, because the kernel-dependent LDSO makefiles are what allow for a
kernel-dependent timeout implementation.

This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
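
For illustration, a structured duration type in that spirit could look
roughly as follows (a simplified sketch, not the exact Genode::Duration
interface):

  #include <cstdint>
  #include <cstdio>

  /* distinct wrapper types instead of bare integers (illustrative only) */
  struct Microseconds { std::uint64_t value; };
  struct Milliseconds { std::uint64_t value; };

  struct Duration
  {
      std::uint64_t _us;  /* internally kept in microseconds */

      explicit Duration(Microseconds us) : _us(us.value) { }
      explicit Duration(Milliseconds ms) : _us(ms.value * 1000) { }

      void add(Microseconds us) { _us += us.value; }

      Microseconds as_us() const { return Microseconds { _us }; }
      Milliseconds as_ms() const { return Milliseconds { _us / 1000 }; }
  };

  int main()
  {
      Duration d(Milliseconds { 2 });  /* 2 ms        */
      d.add(Microseconds { 500 });     /* plus 0.5 ms */
      std::printf("%llu us\n", (unsigned long long)d.as_us().value); /* 2500 */
  }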

Open issues:

* The timeout test fails on the Raspberry Pi because of precision errors in the
  first stage. However, this does not render the framework unusable in general
  on the RPi but is merely an issue when it comes to microsecond precision.

* If we run on ARM with a kernel other than base-hw, the timestamp speed may
  continuously vary from almost 0 up to CPU speed. The Timer, however,
  only uses interpolation if the timestamp speed remained stable (12.5%
  tolerance) for at least 3 observation periods. Currently, one period is
  100 ms, so that is 300 ms (see the sketch after this list). As long as this
  is not the case, Timer_session::elapsed_ms is called instead.

  Anyway, it might happen that the CPU load was stable for some time, so
  interpolation becomes active, and then the timestamp speed drops. In the
  worst case, we would now have 100 ms of slowed-down time. The bad thing
  about it would be that this also affects the timeout of the period.
  Thus, it might "freeze" the local time for more than 100 ms.

  On the other hand, if the timestamp speed suddenly rises after some
  stable time, the interpolated time can get too fast. This would shorten the
  period but may nonetheless result in drifting away into the far future.
  Now we would have the problem that we cannot deliver the real time
  anymore until it has caught up, because the output of Timer::curr_time
  shall be monotone. So, effectively, the local time might "freeze" again
  for more than 100 ms.

  One solution would be to not use Trace::timestamp on ARM without base-hw
  but a function whose return value causes the Timer to never use
  interpolation because of its stability policy.
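
A minimal sketch of such a stability policy, as referenced in the list above
(the constants mirror the quoted 12.5% tolerance and 3 periods of 100 ms; the
code itself is illustrative, not the Timer implementation):

  #include <cmath>
  #include <cstdio>
  #include <initializer_list>

  /* illustrative filter, names are hypothetical, not the Timer implementation */
  struct Stability_filter
  {
      static constexpr double   TOLERANCE      = 0.125;  /* 12.5%      */
      static constexpr unsigned PERIODS_NEEDED = 3;      /* 3 x 100 ms */

      double   _last_speed     = 0;  /* timestamp ticks per 100-ms period */
      unsigned _stable_periods = 0;

      /* called once per observation period with the measured timestamp speed */
      void observe(double speed)
      {
          bool const stable = _last_speed > 0
                           && std::fabs(speed - _last_speed)
                              <= _last_speed * TOLERANCE;
          _stable_periods = stable ? _stable_periods + 1 : 0;
          _last_speed     = speed;
      }

      /* true: interpolate locally, false: fall back to the elapsed_ms RPC */
      bool interpolation_allowed() const
      {
          return _stable_periods >= PERIODS_NEEDED;
      }
  };

  int main()
  {
      Stability_filter f;
      for (double speed : { 1000.0, 1010.0, 995.0, 1005.0 })
          f.observe(speed);
      std::printf("%s\n", f.interpolation_allowed() ? "interpolate"
                                                    : "elapsed_ms");
  }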

Fixes #2400
2017-05-31 13:16:11 +02:00
Norman Feske
773e08976d Assign cap quotas in run scripts and recipes
Issue #2398
2017-05-31 13:16:06 +02:00
Norman Feske
ccffbb0dfc Build dynamically linked executables by default
Fixes #2184
2016-12-14 11:22:27 +01:00
Martin Stein
791138ee63 os: introduce and test timeout framework
Ref #2170
2016-11-30 13:38:04 +01:00