This commit fixes the following issues regarding cache maintenance
under ARM:
* read out the I- and D-cache line sizes at runtime and use the correct one
* remove 'update_data_region' call from unprivileged syscalls
* rename 'update_instr_region' syscall to 'cache_coherent_region' to
reflect what it is doing, namely making the I- and D-cache coherent
* restrict 'cache_coherent_region' syscall to one page at a time (see the
sketch below)
* look up the region given in a 'cache_coherent_region' syscall in the
page-table of the PD to prevent machine exceptions in the kernel
* only clean D-cache lines, do not invalidate them when pages were
added on Cortex-A8 and ARMv6 (the MMU sees phys. memory here)
* remove unused code relics of cache maintenance
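A minimal usage sketch of the page-wise restriction, assuming a
'Kernel::cache_coherent_region(addr, size)' signature and a 4 KiB page size
(both are assumptions, not taken from the actual headers):

  #include <cstddef>
  #include <cstdint>

  /* assumed syscall binding, not the actual header */
  namespace Kernel {
      void cache_coherent_region(std::uintptr_t base, std::size_t size);
  }

  /* make a region coherent by issuing the syscall one page at a time */
  static void make_region_coherent(std::uintptr_t base, std::size_t size)
  {
      constexpr std::size_t PAGE_SIZE = 4096;

      std::uintptr_t const end = base + size;
      for (std::uintptr_t addr = base; addr < end; ) {
          std::size_t const rest  = (std::size_t)(end - addr);
          std::size_t const chunk = rest < PAGE_SIZE ? rest : PAGE_SIZE;
          Kernel::cache_coherent_region(addr, chunk);
          addr += chunk;
      }
  }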
In addition, it introduces per-architecture memory-clearing functions
used by core when preparing new dataspaces. Thereby, it optimizes:
* on ARMv7 using per-word assignments
* on ARMv8 using cache-line zeroing
* on x86_64 using the 'rep stosq' assembler instruction (see the sketch below)
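For illustration, a minimal sketch of the x86_64 variant, assuming a
word-aligned region whose size is a multiple of eight bytes (names are
illustrative, not the actual core code):

  #include <cstddef>
  #include <cstdint>

  /* clear a word-aligned region with 'rep stosq': RAX holds the fill value
   * (zero), RDI the destination, RCX the number of 64-bit words to store */
  static void clear_region_x86_64(void *base, std::size_t num_bytes)
  {
      std::uint64_t *dst   = static_cast<std::uint64_t *>(base);
      std::uint64_t  count = num_bytes / sizeof(std::uint64_t);

      asm volatile ("rep stosq"
                    : "+D" (dst), "+c" (count)
                    :  "a" (0ULL)
                    : "memory");
  }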
Fix #3685
* Introduces the pending_signal syscall to check for new signals for the
calling thread without blocking (see the sketch below)
* Implements pending_signal in the hw-specific part of the base library on
top of the new syscall
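A minimal usage sketch, assuming that pending_signal returns 0 if a signal
could be fetched without blocking and an error code otherwise (the exact
signature and return convention are assumptions, not the kernel headers):

  /* assumed binding, not the actual kernel interface */
  namespace Kernel {
      int pending_signal();  /* 0 if a signal was delivered, error code otherwise */
  }

  void dispatch_signal();  /* illustrative signal-handling hook */

  /* handle a signal only if one is already queued, never block */
  static void handle_signal_if_pending()
  {
      if (Kernel::pending_signal() == 0)
          dispatch_signal();
      /* otherwise the thread stays runnable and may poll again later */
  }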
Fix #3217
* Introduce 64-bit tick counter
* Let the timer always count when possible, even if it has already fired
* Simplify the kernel syscall API to have one current-time call,
which returns the elapsed microseconds since boot
Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only in a periodic fashion, independently from the calls
to Genode::Timer::curr_time. If one now calls Genode::Timer::curr_time,
the function takes the last-read remote-time value and adapts it using
the timestamp difference since that remote-time read. The conversion
factor from timestamps to time is estimated on every remote-time read,
using the last-read remote-time value and the timestamp difference since
the last remote-time read (see the sketch below).
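The following sketch illustrates the interpolation scheme; names, members,
and the lack of overflow and monotonicity handling are simplifications, not
the actual Genode::Timer implementation:

  #include <cstdint>

  struct Interpolated_clock
  {
      std::uint64_t remote_us { 0 };  /* remote time at the last periodic read */
      std::uint64_t remote_ts { 0 };  /* timestamp taken at that read          */
      std::uint64_t ts_per_us { 1 };  /* estimated conversion factor           */

      /* called on each periodic Timer_session::elapsed_ms read */
      void remote_read(std::uint64_t new_remote_us, std::uint64_t new_ts)
      {
          std::uint64_t const us_diff = new_remote_us - remote_us;
          std::uint64_t const factor  = us_diff ? (new_ts - remote_ts) / us_diff : 0;
          if (factor)
              ts_per_us = factor;

          remote_us = new_remote_us;
          remote_ts = new_ts;
      }

      /* curr_time: last remote time plus locally interpolated progress */
      std::uint64_t curr_us(std::uint64_t now_ts) const
      {
          return remote_us + (now_ts - remote_ts) / ts_per_us;
      }
  };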
This commit also reworks the timeout test. The test now has two stages.
In the first stage, it tests fast polling of
Genode::Timer::curr_time. This stage checks the error between the locally
interpolated and the timer-driver time as well as whether the locally
interpolated time is monotonic and sufficiently homogeneous. In the
second stage, several periodic and one-shot timeouts are scheduled at
once. This stage checks whether the timeouts trigger sufficiently precisely.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by the Genode::Timer on base-hw as a substitute for the
timestamp. This is because, on ARM, the timestamp function uses the ARM
performance counter, which stops counting while the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs to
be scheduled. Thus, the ARM performance counter is not a good choice for
time interpolation, and we use the kernel-internal time instead.
With this commit, the timeout library becomes a basic library. That means
that it is linked against the LDSO, which then provides it to the program it
serves. Furthermore, the timeout library can no longer be used without the
LDSO, because the kernel-dependent LDSO makefiles are what enable a
kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
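A rough sketch of what such a structured type could look like; the actual
Genode::Duration interface may differ in names and operations:

  #include <cstdint>

  struct Microseconds { std::uint64_t value; };
  struct Milliseconds { std::uint64_t value; };

  struct Duration
  {
      std::uint64_t us;  /* internally kept in microseconds */

      explicit Duration(Microseconds us) : us(us.value) { }
      explicit Duration(Milliseconds ms) : us(ms.value * 1000) { }

      void add(Microseconds more) { us += more.value; }

      Microseconds trunc_to_us() const { return Microseconds { us }; }
      Milliseconds trunc_to_ms() const { return Milliseconds { us / 1000 }; }
  };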
Open issues:
* The timeout test fails on the Raspberry Pi because of precision errors in the
first stage. However, this does not render the framework unusable in general
on the Raspberry Pi but merely is an issue when speaking of microseconds precision.
* If we run on ARM with a kernel other than hw, the timestamp speed may
continuously vary from almost 0 up to CPU speed. The Timer, however,
only uses interpolation if the timestamp speed remained stable (12.5%
tolerance) for at least 3 observation periods. Currently, one period is
100ms, so that amounts to 300ms. As long as this is not the case,
Timer_session::elapsed_ms is called instead.
Anyway, it might happen that the CPU load was stable for some time, so
interpolation becomes active, and then the timestamp speed drops. In the
worst case, we would now have 100ms of slowed-down time. The bad thing
about it is that this also affects the length of the period. Thus, it
might "freeze" the local time for more than 100ms.
On the other hand, if the timestamp speed suddenly rises after some
stable time, the interpolated time can get too fast. This would shorten
the period but may nonetheless result in drifting away into the far future.
Now we would have the problem that we can't deliver the real time
anymore until it has caught up, because the output of Timer::curr_time
shall be monotonic. So, effectively, local time might "freeze" again for
more than 100ms.
A solution would be to not use Trace::timestamp on ARM with kernels other
than hw but a function whose return value causes the Timer to never use
interpolation because of its stability policy.
Fixes #2400
There was a race when the component entrypoint wanted to do
'wait_and_dispatch_one_signal'. In this function, it raises a flag for
the signal-proxy thread to notice that the entrypoint also wants to
block for signals. When the flag was set and the signal proxy woke up
with a new signal, it tried to cancel the blocking of the entrypoint.
However, if the entrypoint had not reached the signal blocking at this
point, cancelling the blocking failed without a solution. Now, the new
Kernel::cancel_next_signal_blocking call solves the problem by immediately
storing a request to cancel the next signal blocking of a thread,
without blocking itself.
Ref #2284
This cleans up the syscalls that are mainly used to control the
scheduling readiness of a thread. The different use cases and
requirements were somehow mixed together in the previous interface. The
new syscall set is:
1) pause_thread and resume_thread
They don't affect the state of the thread (IPC, signalling, etc.) but
merely decide whether the thread is allowed to be scheduled or not, the
so-called pause state. The pause state is orthogonal to the thread state
and masks it when it comes to scheduling. In contrast to the stopped
state, which is described in "stop_thread and restart_thread", the
thread state and the UTCB content of a thread may change while in the
paused state. However, the register state of a thread doesn't change
while paused. The "pause" and "resume" syscalls are both core-restricted
and may target any thread. They are used as back end for the CPU session
calls "pause" and "resume". The "pause/resume" feature is made for
applications like the GDB monitor that want to transparently stop and
continue the execution of a thread, no matter what state the thread is
in.
2) stop_thread and restart_thread
The stop syscall can only be used on a thread in the non-blocking
("active") thread state. The thread then switches to the "stopped"
thread state in which it explicitly waits for a restart. The restart
syscall can only be used on a thread in the "stopped" or the "active"
thread state. The thread then switches back to the "active" thread state
and the syscall returns whether the thread was stopped. Both syscalls
are not core-restricted. "Stop" always targets the calling thread while
"restart" may target any thread in the same PD as the caller. Thread
state and UTCB content of a thread don't change while in the stopped
state. The "stop/restart" feature is used when an active thread wants to
wait for an event that is not known to the kernel. In practice, the syscalls
are used when waiting for locks and on thread exit (see the sketch after
this list).
3) cancel_thread_blocking
Does cleanly cancel a cancelable blocking thread state (IPC, signalling,
stopped). The thread whose blocking was cancelled goes back to the
"active" thread state. It may receive a syscall return value that
reflects the cancellation. This syscall doesn't affect the pause state
of the thread which means that it may still not get scheduled. The
syscall is core-restricted and may target any thread.
4) yield_thread
Does its best that a thread is scheduled as little as possible in the
current scheduling super-period without touching the thread or pause
state. In the next super-period, however, the thread is scheduled
normally again. The syscall is not core-restricted and always targets
the caller.
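As an illustration of the stop/restart pair from item 2, the following sketch
waits for an event the kernel does not know about. The thread-identifier type,
the syscall signatures, and the atomic flag are assumptions; the race between
the waiter's check and its stop call is glossed over, but the return value of
restart_thread, which reports whether the target was actually stopped, gives
the producer the information needed to handle that case.

  #include <atomic>

  /* assumed bindings, not the actual kernel interface */
  namespace Kernel {
      typedef unsigned Thread_id;
      void stop_thread();                      /* targets the caller             */
      bool restart_thread(Thread_id thread);   /* returns true if it was stopped */
  }

  static std::atomic<bool> event_ready { false };

  /* waiter side: stop until the producer has published the event */
  static void wait_for_event()
  {
      while (!event_ready.load())
          Kernel::stop_thread();   /* switches the caller to the "stopped" state */
  }

  /* producer side: publish the event, then wake the waiter */
  static void produce_event(Kernel::Thread_id waiter)
  {
      event_ready.store(true);
      Kernel::restart_thread(waiter);   /* also legal if the waiter is still "active" */
  }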
Fixes #2104
* Adds public timeout syscalls to the kernel API
* Kernel::timeout installs a timeout and binds a signal context to it that
shall trigger once the timeout has expired (see the usage sketch below)
* With Kernel::timeout_max_us, one can get the maximum installable timeout
* Kernel::timeout_age_us returns the time that has passed since the
calling thread's last timeout installation
* Removes all device-specific back ends for the base-hw timer driver and
implements a generic back end that uses the kernel timeout API
* Adds assertions about the kernel timer frequency that originate from the
requirements of the kernel timeout API and adjusts all timers
accordingly by using their internal dividers
* Introduces the Kernel::Clock class. As a member of each Kernel::Cpu object,
it combines the management of the timer of the CPU with a timeout scheduler.
Not only the timeout API uses the timeout scheduler but also the CPU's job
scheduler for installing scheduling timeouts.
* Introduces the Kernel::time_t type for timer-tick values and values derived
from timer ticks (like microseconds).
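A usage sketch of the three syscalls, assuming microsecond arguments, an
opaque signal-context id, and an illustrative blocking helper (signatures
are assumptions, not the actual kernel interface):

  #include <cstdint>

  /* assumed bindings, not the actual kernel interface */
  namespace Kernel {
      typedef std::uint64_t time_t;
      typedef unsigned      capid_t;

      void   timeout(time_t duration_us, capid_t signal_context_id);
      time_t timeout_max_us();
      time_t timeout_age_us();
  }

  void block_for_bound_signal();  /* illustrative blocking point */

  static void sleep_roughly(Kernel::time_t duration_us, Kernel::capid_t sig_ctx)
  {
      /* clamp to the largest timeout the kernel timer can represent at once */
      Kernel::time_t const max_us = Kernel::timeout_max_us();
      Kernel::time_t const us     = duration_us < max_us ? duration_us : max_us;

      /* the bound signal context triggers once the timeout has expired */
      Kernel::timeout(us, sig_ctx);

      block_for_bound_signal();

      /* time passed since this thread installed its last timeout */
      Kernel::time_t const passed_us = Kernel::timeout_age_us();
      (void)passed_us;
  }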
Fixes #1972
When capabilities are delegated to components, they are added to the UTCB of the
target thread. Before the thread is able to take the capability id out of
the UTCB and adapt the user-level capability reference counter, it might happen
that another thread of the same component deletes the same capability because
its user-level reference counter reached zero. If the kernel then destroys the
capability before the same capability id has been taken out of all UTCBs, an
inconsistent view in the component is the result. To keep a consistent view in
the multi-threading scenario, the kernel now counts how often it puts a
capability into a UTCB. The threads, on the other hand, hint to the kernel when
they have taken capabilities out of the UTCB, so the kernel can decrement the
counter again. Only when the counter is zero can capabilities get destructed.
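A conceptual sketch of that counting; member names are illustrative, not the
kernel's actual capability meta data:

  /* per-capability bookkeeping inside the kernel */
  struct Capability_counter
  {
      unsigned in_utcb_cnt = 0;

      /* kernel copies the capability id into a receiver's UTCB */
      void copied_to_utcb() { in_utcb_cnt++; }

      /* a thread hints that it has taken the id out of its UTCB */
      void taken_from_utcb() { if (in_utcb_cnt) in_utcb_cnt--; }

      /* destruction must be deferred while any UTCB still holds the id */
      bool destructible() const { return in_utcb_cnt == 0; }
  };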
Fix #1623
'block_for_signal' and 'pending_signal' now set a pending flag in the signal
context in order to determine pending signals. The context list is also used by
the 'Signal_receiver' during destruction.
Fixes #1738
This patch changes the top-level directory layout as a preparatory
step for improving the tools for managing 3rd-party source codes.
The rationale is described in the issue referenced below.
Issue #1082