E.g., the script manually tried to configure lighttpd and used the old
porting directory structure where ports were still integrated in the
repos. For the same reason, it produced compile errors with noux
packages.
Ref #2398
We do not have debug symbols in the bin/ binaries anymore. Thus, use the
debug/ binaries instead. We now have kernel-specific binary names for
ld.lib.so and some others. Adapt to this fact as well. To do so without
unnecessary output, add a new "silent" parameter to the
"kernel_specific_binary" procedure.
Ref #2398
A dataspace capability request to a ROM service may invalidate any
previously issued dataspace. Therefore, no requests should be made while
a session dataspace is mapped. Reducing calls to the session also
improves performance where servicing a ROM request has a significant
cost.
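The resulting usage pattern, as a minimal sketch (the helper function
and its arguments are made up for illustration; the calls are the plain
ROM-session API):

  #include <base/env.h>
  #include <rom_session/client.h>

  /* sketch: evaluate a ROM module, detaching before the next request */
  static void consume_rom(Genode::Env &env,
                          Genode::Rom_session_capability cap)
  {
      Genode::Rom_session_client rom(cap);

      /* attach, read, and detach while no other request is made */
      void *local = env.rm().attach(rom.dataspace());
      /* ... evaluate the module content here ... */
      env.rm().detach(local);

      /* safe only now - the request may invalidate the old dataspace */
      Genode::Rom_dataspace_capability fresh = rom.dataspace();
      (void)fresh;
  }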
Fixes #2418
There are programs, e.g. curl, that check if a connection was
established successfully by looking at SO_ERROR. Pretend that
the getsockopt() call was executed to keep them happy. If they
try to use a broken connection, the other socket functions will
bail.
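As a sketch of the pretended success (the function is hypothetical and
merely stands in for the SO_ERROR handling in the socket back end):

  #include <sys/socket.h>
  #include <string.h>
  #include <errno.h>

  /* illustrative stub: report "no error" for SO_ERROR so that programs
     like curl consider the connection established */
  static int so_error_stub(void *optval, socklen_t *optlen)
  {
      int const no_error = 0;

      if (!optval || !optlen || *optlen < (socklen_t)sizeof(no_error)) {
          errno = EINVAL;
          return -1;
      }
      memcpy(optval, &no_error, sizeof(no_error));
      *optlen = sizeof(no_error);
      return 0; /* pretend the getsockopt() call was executed */
  }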
This patch changes the noux build rules to produce a tar archive in
'bin/', removing the need for this step from the run scripts.
This way, the visible result of a built noux package is a single (tar)
file in '<build-dir>/bin/', which is suitable for use as a ROM module.
The 'Stack_area_ram_session' is now a 'Stack_area_ram_allocator', which
simplifies the code and removes a dependency on the 'Ram_session'
interface, which we ultimately want to remove.
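The assumed shape of the narrowed interface, as a sketch (the actual
declaration may differ): only the two operations the stack area needs,
without the remaining session semantics.

  #include <base/stdint.h>
  #include <dataspace/capability.h>

  /* sketch: allocation-only interface instead of a full RAM session */
  struct Stack_area_ram_allocator
  {
      virtual Genode::Dataspace_capability alloc(Genode::size_t) = 0;
      virtual void free(Genode::Dataspace_capability) = 0;
  };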
Issue #2407
By supplying a statically allocated initial block to the slab allocator
for signal contexts, we can construct a 'Signal_broker' (the back end
of the PD's signalling API) without any dynamic memory allocation. This
is a precondition for using the PD as meta-data allocator for its
contained signal broker (meta-data allocations must not happen before
the PD construction is complete).
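A minimal sketch of the technique (type name and block size are
assumed): the initial block is part of the object itself, so the first
slab entries need no call to the backing allocator.

  #include <base/tslab.h>

  struct Signal_context { };        /* stand-in for the real type */

  enum { SBS = 4096 };              /* slab-block size, assumed */

  struct Context_slab : Genode::Tslab<Signal_context, SBS>
  {
      char _initial_block[SBS];

      Context_slab(Genode::Allocator &md_alloc)
      : Genode::Tslab<Signal_context, SBS>(&md_alloc, _initial_block)
      { }
  };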
Issue #2407
If a child is allowed to constrain physical memory allocations but
leaves the 'phys_start' and 'phys_size' session arguments blank, init
applies built-in constraints for allocating DMA buffers.
The only component that makes use of the physical-memory constraint
feature is the platform driver. Since the built-in heuristics are
applied to the platform driver's environment RAM session, all
allocations performed by the platform driver satisfy the DMA
constraints.
The reason for building these heuristics into init, as opposed to
supplying the values as configuration arguments, is that the values
differ between 32 and 64 bit. The configuration approach would raise
the need to differentiate the init configuration for both cases, which
are otherwise completely identical.
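For illustration only - the following values are placeholders, not
init's actual built-in constraints; the point is merely that the choice
depends on the word width:

  #include <cstdint>

  struct Dma_constraint { std::uint64_t phys_start, phys_size; };

  /* placeholder values, init's real built-in constraints differ */
  static constexpr Dma_constraint builtin_dma_constraint()
  {
      return (sizeof(void *) == 4)
             ? Dma_constraint { 0, 1ULL << 32 }    /* 32 bit */
             : Dma_constraint { 0, 1ULL << 36 };   /* 64 bit */
  }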
Issue #2407
This commit removes support for constraining RAM allocations from the
platform_drv. A subsequent commit adds this feature to init.
Issue #2398
Issue #2407
Separating the session-interface concerns from the mechanics of the
dataspace creation makes the code simpler to follow and eases merging
the RAM session with the PD session in a subsequent step.
Issue #2407
This patch allows core's 'Signal_transmitter' implementation to sidestep
the 'Env::Pd' interface and thereby adhere to a stricter layering within
core. The 'Signal_transmitter' now uses - on kernels that depend on it -
a dedicated (and fairly freestanding) RPC proxy mechanism for signal
delivery, instead of channeling signals through the 'Pd_session::submit'
RPC function.
With the capability-quota mechanism, the terminal session won't always
be constructed completely on the first try (we may run out of caps in
the middle of the construction). Therefore, all members of the object
must be properly destructible. Furthermore, the patch replaces the
sliced heap by a regular heap to avoid allocating a new dataspace for
each line of the cell array.
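A sketch of the pattern (type and member names are assumed), using
'Genode::Constructible': members constructed step-wise are torn down
automatically if a later construction step throws, e.g., 'Out_of_caps'.

  #include <util/reconstructible.h>

  struct Cell_array { };            /* stand-in for the real type */

  struct Session_component
  {
      Genode::Constructible<Cell_array> _cells { };

      Session_component()
      {
          _cells.construct();
          /* later steps may throw - '_cells' is destructed
             automatically when the member goes out of scope */
      }
  };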
Previously, Genode::Timer::curr_time always used the
Timer_session::elapsed_ms RPC as back end. Now, Genode::Timer reads
this remote time only in a periodic fashion, independently of the calls
to Genode::Timer::curr_time. If one now calls Genode::Timer::curr_time,
the function takes the last-read remote time value and extrapolates it
using the timestamp difference since that remote-time read. The
conversion factor from timestamps to time is re-estimated on every
remote-time read, using the last-read remote-time value and the
timestamp difference since the previous remote-time read.
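In code, the interpolation reads roughly as follows (member names are
assumed; the real implementation lives in the timeout framework):

  #include <base/stdint.h>

  struct Interpolation_state
  {
      Genode::uint64_t remote_time_us { 0 }; /* last RPC-read time value */
      Genode::uint64_t remote_ts      { 0 }; /* timestamp at that read   */
      Genode::uint64_t ts_per_us      { 1 }; /* factor, re-estimated on
                                                each remote-time read    */

      Genode::uint64_t curr_time_us(Genode::uint64_t now_ts) const
      {
          /* extrapolate from the last remote time using the
             timestamp difference since that read */
          return remote_time_us + (now_ts - remote_ts) / ts_per_us;
      }
  };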
This commit also re-works the timeout test. The test now has two stages.
In the first stage, it tests fast polling of Genode::Timer::curr_time.
This stage checks the error between the locally interpolated time and
the timer-driver time, as well as whether the locally interpolated time
is monotone and sufficiently homogeneous. In the second stage, several
periodic and one-shot timeouts are scheduled at once. This stage checks
whether the timeouts trigger with sufficient precision.
This commit adds the new Kernel::time syscall to base-hw. The syscall is
solely used by Genode::Timer on base-hw as a substitute for the
timestamp. This is because, on ARM, the timestamp function uses the ARM
performance counter, which stops counting while the WFI (wait for
interrupt) instruction is active. This instruction, however, is used by
the base-hw idle contexts that become active when no user thread needs
to be scheduled. Thus, the ARM performance counter is not a good choice
for time interpolation, and we use the kernel-internal time instead.
With this commit, the timeout library becomes a basic library. That
means it is linked against the LDSO, which then provides it to the
program it serves. Furthermore, the timeout library can no longer be
used without the LDSO because the kernel-dependent LDSO make files are
what enables a kernel-dependent timeout implementation.
This commit introduces a structured Duration type that shall successively
replace the use of Microseconds, Milliseconds, and integer types for duration
values.
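An assumed shape of the new type (the actual interface may differ in
detail):

  #include <base/stdint.h>

  struct Microseconds { Genode::uint64_t value; };
  struct Milliseconds { Genode::uint64_t value; };

  /* structured duration value instead of a raw integer */
  struct Duration
  {
      Genode::uint64_t _us;

      explicit Duration(Microseconds us) : _us(us.value) { }

      void add(Microseconds us) { _us += us.value; }
      void add(Milliseconds ms) { _us += ms.value * 1000; }

      Microseconds trunc_to_plain_us() const { return { _us }; }
  };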
Open issues:
* The timeout test fails on the Raspberry Pi because of precision
  errors in the first stage. However, this does not render the
  framework unusable on the Raspberry Pi in general but is merely an
  issue when it comes to microseconds precision.
* If we run on ARM with a kernel other than hw, the timestamp speed may
  continuously vary from almost 0 up to CPU speed. The Timer, however,
  only uses interpolation if the timestamp speed has remained stable
  (12.5% tolerance) for at least three observation periods. Currently,
  one period is 100ms, so that makes 300ms. As long as this is not the
  case, Timer_session::elapsed_ms is called instead. A sketch of this
  stability check follows below.

  Still, it might happen that the CPU load was stable for some time, so
  interpolation becomes active, and then the timestamp speed drops. In
  the worst case, we would now have 100ms of slowed-down time. The bad
  thing about it is that this also affects the timeout of the period.
  Thus, it might "freeze" the local time for more than 100ms.

  On the other hand, if the timestamp speed suddenly rises after some
  stable time, the interpolated time can become too fast. This would
  shorten the period but may nonetheless result in drifting away into
  the far future. Then we would have the problem that we can't deliver
  the real time anymore until it has caught up, because the output of
  Timer::curr_time shall be monotone. So, effectively, the local time
  might "freeze" again for more than 100ms.

  A solution would be to not use Trace::timestamp on ARM w/o hw but a
  function whose return value causes the Timer to never use
  interpolation because of its stability policy.
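The mentioned stability check, as a sketch (names assumed):

  #include <base/stdint.h>

  /* a new timestamp frequency counts as stable if it deviates by less
     than 12.5% from the previous observation; interpolation is enabled
     only after three consecutive stable 100ms periods */
  static bool freq_stable(Genode::uint64_t prev_f, Genode::uint64_t new_f)
  {
      Genode::uint64_t const diff = new_f > prev_f ? new_f - prev_f
                                                   : prev_f - new_f;
      return diff < prev_f / 8;     /* 12.5% tolerance */
  }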
Fixes #2400