The new 'select_from_ports' function allows a target description file to
query the path to an installed port. All ports are stored in a central
location specified as CONTRIB_DIR. By default, CONTRIB_DIR is defined
as '<genode-dir>/contrib'. Ports of 3rd-party source code are managed
using the tools at '<genode-dir>/tool/ports/'.
Issue #1082
This patch changes the top-level directory layout as a preparatory
step for improving the tools for managing 3rd-party source code.
The rationale is described in the issue referenced below.
Issue #1082
This patch avoids the construction of the Genode::Config object in Noux
processes. The construction of this object would populate the Noux
process with additional capabilities, which cannot be handled by
'fork()'.
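As a rough illustration of why this matters (a hypothetical sketch, not the
actual Genode::Config implementation), constructing such a config object
opens a ROM session and attaches its dataspace, each of which involves
capabilities that 'fork()' cannot re-create in the child:

  #include <rom_session/connection.h>
  #include <base/env.h>

  struct Config_sketch
  {
      /* opening the ROM session allocates a session capability */
      Genode::Rom_connection _config_rom { "config" };

      /* attaching the dataspace adds another capability-backed resource */
      void *_config_addr =
          Genode::env()->rm_session()->attach(_config_rom.dataspace());
  };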
The old implementation of sleep_forever() used a local Ipc_server object,
which is not announced (i.e., not known) outside of the blocking
process/thread, to wait infinitely for incoming messages. In the past as
well as the present, this has led to problems (e.g., issues #538 and #1032).
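A minimal sketch of an alternative that avoids the local Ipc_server, assuming
the thread simply blocks on a lock that is never released (illustrative only,
not necessarily the literal new implementation):

  #include <base/lock.h>

  namespace Genode {

      /* illustration: block the calling thread on a lock nobody ever releases */
      void sleep_forever()
      {
          Lock sleep_lock(Lock::LOCKED);
          while (true)
              sleep_lock.lock();
      }
  }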
Fixes #1135
Fixes #538
Fixes #1032
Use a separate libc Mem_alloc instance per MMTYP of VirtualBox. This
preserves the invariant that all memory allocations of a given MMTYP are
located densely.
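The idea, roughly sketched (the 'Mem_type' enum and 'alloc_for()' helper are
hypothetical stand-ins for the VirtualBox glue code, and backing-store
registration via add_range() is omitted):

  #include <base/allocator_avl.h>
  #include <base/env.h>

  /* hypothetical stand-in for VirtualBox's MMTYP enumeration */
  enum Mem_type { MEM_TYPE_CODE, MEM_TYPE_DATA, MEM_TYPE_COUNT };

  /* hand out one allocator per memory type so that all allocations of a
   * given type are served from the same backing store and stay dense */
  static Genode::Allocator *alloc_for(Mem_type type)
  {
      static Genode::Allocator_avl *allocators[MEM_TYPE_COUNT];

      if (!allocators[type])
          allocators[type] = new (Genode::env()->heap())
              Genode::Allocator_avl(Genode::env()->heap());

      return allocators[type];
  }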
Fixes #1130
Instead of mapping all physical memory 1:1 into core/kernel's address space,
this commit limits the 1:1 mapping to the binary image and the I/O memory
regions used by the kernel. All subsequent memory accesses of core are done
by mapping the corresponding memory on demand, and not necessarily 1:1.
This commit has several side effects:
The page-table code had to be revisited completely. The kernel no longer
inserts anything into the page tables, apart from the initial translations
needed to have the core/kernel image available when enabling the MMU. The
page tables and higher-level translation tables are no longer named Tlb but
Translation_table instead. No indirection class is required to define the
translation tables of a concrete SoC; the appropriate ARM specifier is
sufficient.
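The resulting per-PD translation-table interface can be pictured roughly as
follows (a hypothetical declaration with illustrative names and signatures,
not the literal base-hw code):

  #include <base/stdint.h>

  /* hypothetical sketch of the per-PD translation table described above */
  class Translation_table
  {
      public:

          /* insert a translation from virtual address 'vo' to physical 'pa' */
          void insert_translation(Genode::addr_t vo, Genode::addr_t pa,
                                  Genode::size_t size, bool writeable,
                                  bool executable, bool io_mem) { /* ... */ }

          /* remove a translation again, e.g., for core's on-demand mappings */
          void remove_translation(Genode::addr_t vo, Genode::size_t size) { }
  };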
The ability to map core's memory the same way as for all other protection
domains makes the special treatment of core's threads (no context area)
obsolete.
Ref #567 (partly solves it)
Fix #723
Fix #1068
Removes the generic processor broadcast function call. Until now, that call
was used for cross-processor TLB maintenance operations only. When core/kernel
gets its memory mapped on demand, and unmapped again, the previous
cross-processor flush routine no longer works because of a chicken-and-egg
problem.
The previous cross-processor broadcast was realized by a thread, constructed
by core, running on each processor core. When constructing a thread in core,
a dataspace for its thread context is allocated. Each allocated RAM dataspace
gets attached, zeroed out, and detached again. The detach routine requires a
TLB flush operation executed on each processor core.
Instead of executing a thread on each processor core, a thread waiting for a
global TLB flush is now removed from the scheduler queue and attached to a
TLB-flush queue of each processor. The processor-local queue is checked
whenever the kernel is entered. The last processor that executes the TLB
flush re-attaches the blocked thread to its scheduler queue.
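Sketched in pseudo-code (all names are illustrative, not the actual kernel
code, and the atomicity of the counter update is omitted for brevity):

  #include <stdio.h>

  /* illustrative sketch of the TLB-flush handshake described above */
  struct Flush_job
  {
      unsigned pending_cpus;       /* processors that still need to flush     */
      int      blocked_thread_id;  /* thread removed from the scheduler queue */
  };

  struct Processor
  {
      Flush_job *flush_queue = nullptr;   /* per-processor TLB-flush queue */

      /* checked whenever the kernel is entered on this processor */
      void on_kernel_entry()
      {
          if (!flush_queue)
              return;

          printf("flush local TLB\n");    /* stands in for the real TLB flush */

          Flush_job *job = flush_queue;
          flush_queue = nullptr;

          /* the last processor to flush re-attaches the blocked thread */
          if (--job->pending_cpus == 0)
              printf("re-attach thread %d to the scheduler queue\n",
                     job->blocked_thread_id);
      }
  };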
To ease the mechanism described above, a platform thread is now directly
associated with a platform pd object, instead of just being associated with
the kernel pd's id.
Ref #723
It covers bugs that we should detect and fix: depending on the result of
pthread_self(), the locking implementation (ours and VirtualBox's) decides
whether to take a lock or just to assume a re-entrant locking attempt.
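The decision in question looks roughly like this (a hypothetical
'Recursive_lock' sketch, not the VirtualBox or Genode code):

  #include <pthread.h>

  /* hypothetical recursive-lock sketch: if pthread_self() misidentifies the
   * caller, the wrong branch is taken and the lock state gets corrupted */
  struct Recursive_lock
  {
      pthread_mutex_t mutex    = PTHREAD_MUTEX_INITIALIZER;
      pthread_t       owner    { };
      bool            occupied = false;
      unsigned        level    = 0;

      void lock()
      {
          if (occupied && pthread_equal(owner, pthread_self())) {
              level++;   /* assumed to be a re-entrant locking attempt */
              return;
          }
          pthread_mutex_lock(&mutex);
          owner    = pthread_self();
          occupied = true;
          level    = 1;
      }

      void unlock()
      {
          if (--level == 0) {
              occupied = false;
              pthread_mutex_unlock(&mutex);
          }
      }
  };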
Fixes #1128
The pointer-report facility used to report the screen-absolute position of
the mouse pointer. For nitpicker clients, however, this position is
meaningless because their coordinate space is always constrained to the area
below the menu bar. This patch offsets the reported position accordingly.
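For illustration (a hypothetical helper using Genode's Point type, not the
actual nitpicker code):

  #include <util/geometry.h>

  /* hypothetical helper: translate the screen-absolute pointer position into
   * the client coordinate space, which starts below the menu bar */
  static Genode::Point<> client_pointer_position(Genode::Point<> screen_pos,
                                                 int menu_bar_height)
  {
      return screen_pos - Genode::Point<>(0, menu_bar_height);
  }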
The rm_session quota of the context area is not sufficient to start more
than ~95 threads per address space. If one really needs that many threads
per address space, one could increase the quota or respond dynamically using
the Expanding_rm_session class. Until now, there has been no need to support
that many threads per address space.
Issue #1122