This patch integrates three region maps into each PD session to
reduce the session overhead and to simplify the PD creation procedure.
Please refer to the issue cited below for an in-depth discussion.
Note the API change:
With this patch, the semantics of core's RM service have changed. Now,
the service is merely a tool for creating and destroying managed
dataspaces, which are rarely needed. Regular components no longer need an
RM session. For this reason, the corresponding argument for the
'Process' and 'Child' constructors has been removed.
The former interface of the 'Rm_session' is now named 'Region_map'. As a
minor refinement, the 'Fault_type' enum values are now part of the
'Region_map::State' struct.
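To illustrate the new usage, a component can reach its address space via
the region map provided by its PD session roughly as follows (a minimal
sketch; the accessor and attach signatures are assumed from the new
'Pd_session' and 'Region_map' interfaces):

  #include <pd_session/pd_session.h>
  #include <region_map/client.h>
  #include <dataspace/capability.h>

  void attach_example(Genode::Pd_session &pd, Genode::Dataspace_capability ds)
  {
    /* the PD session hands out its address-space region map */
    Genode::Region_map_client address_space(pd.address_space());

    /* attach the dataspace without involving a dedicated RM session */
    void *local_ptr = address_space.attach(ds);
    (void)local_ptr;
  }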
Issue #1938
The return code of 'assign_parent' remained unused, so this patch
removes it.
The bind_thread function fails only due to platform-specific limitations
such as the exhaustion of ID name spaces, which cannot be sensibly
handled by the PD-session client. When they occurred, such conditions used to
be reflected by integer return codes that were used for diagnostic
messages only. The patch removes the return codes and leaves the
diagnostic output to core.
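At the interface level, the change boils down to the following sketch
(declarations assumed from the PD session before and after this patch):

  /* before: the return value was used for diagnostic messages only */
  int  bind_thread(Thread_capability thread);

  /* after: error conditions are reported by core's diagnostic output */
  void bind_thread(Thread_capability thread);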
Fixes #1842
Besides unifying the Msgbuf_base classes across all platforms, this
patch merges the Ipc_marshaller functionality into Msgbuf_base, which
leads to several further simplifications. For example, this patch
eventually moves the Native_connection_state and all remaining state of
the former Ipc_server into the actual server loop, which not only
makes the flow of control and information much more obvious but is
also more flexible. E.g., on NOVA, we don't even have the notion of
reply-and-wait, and we are no longer forced to pretend otherwise.
Issue #1832
This patch unifies the CPU session interface across all platforms. The
former differences are moved to respective "native-CPU" interfaces.
NOVA is not covered by the patch and still relies on a custom version of
the core-internal 'cpu_session_component.h'. However, this will soon be
removed once the ongoing rework of pause/single-step on NOVA is
completed.
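A client obtains the kernel-specific part roughly as follows (sketch;
the 'native_cpu' accessor name is an assumption based on this patch):

  #include <cpu_session/cpu_session.h>

  void native_cpu_example(Genode::Cpu_session &cpu)
  {
    /* capability to the kernel-specific "native-CPU" RPC interface */
    auto native_cpu_cap = cpu.native_cpu();
    (void)native_cpu_cap;
  }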
Fixes #1922
This patch removes the dynamically growing slab allocator from the
page-table registry. This has two benefits. First, we eliminate the
corner case where the slab allocator needed to extend its backing store
while establishing a core-local memory mapping, thereby triggering a
nested core-local mapping. Without this corner case, the reentrant lock
is no longer needed. Second, we remove the dependency on the overly
large old API of the slab allocator, so we can tighten the slab
interface.
This commit introduces the new `Component` interface in the form of the
headers base/component.h and base/entrypoint.h. The os/server.h API
has become merely a compatibility wrapper and will eventually be removed.
The same holds true for os/signal_rpc_dispatcher.h. The mechanism has
moved to base/signal.h and is now called 'Signal_handler'.
Since the patch shuffles headers around, please do a 'make clean' in the
build directory.
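For reference, a minimal component based on the new interface may look
roughly like this (sketch; the signatures are assumed from base/component.h
and base/signal.h as introduced by this commit):

  #include <base/component.h>
  #include <base/signal.h>

  struct Main
  {
    Genode::Env &_env;

    void _handle_config() { /* respond to the signal */ }

    /* dispatched in the context of the component's entrypoint */
    Genode::Signal_handler<Main> _config_handler {
      _env.ep(), *this, &Main::_handle_config };

    Main(Genode::Env &env) : _env(env) { }
  };

  Genode::size_t Component::stack_size() { return 64*1024; }

  void Component::construct(Genode::Env &env) { static Main main(env); }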
Issue #1832
This commit replaces the stateful 'Ipc_client' type with the plain
function 'ipc_call' that takes all the needed state as arguments.
The stateful 'Ipc_server' class is retained but has moved from the
public API to the internal ipc_server.h header. The kernel-specific
implementations were cleaned up and simplified. E.g., the 'wait'
function no longer exists. The badge and exception code are no
longer carried in the message buffers but are handled in kernel-specific
ways.
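Conceptually, the replacement looks as follows (purely illustrative
declaration, not the literal signature):

  /* stateless call: destination and message buffers are passed explicitly */
  Rpc_exception_code ipc_call(Native_capability dst,
                              Msgbuf_base &snd_msg, Msgbuf_base &rcv_msg);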
Issue #610
Issue #1832
This patch moves the details of stack allocation and organization to
the base-internal headers. Thereby, I replaced the notion of "thread
contexts" by "stacks" as the latter term is much more intuitive. The fact
that we place thread-specific information at the bottom of the stack does
not warrant the introduction of new terminology.
Issue #1832
On seL4 and L4/Fiasco, we employ a simple yielding spinlock as the lock
implementation. Consequently, these base platforms used to have a
simplified header. However, since the regular cancelable_lock has all
the member variables needed to implement a spinlock, we can simply use
the generic header on those two platforms too, just leaving some other
parts of the generic header unused. So at the API level, the difference is
not visible.
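For illustration, the basic idea of such a yielding spinlock is the
following (generic sketch, not the actual Genode code):

  #include <atomic>
  #include <thread>

  struct Yielding_spinlock
  {
    std::atomic_flag _taken = ATOMIC_FLAG_INIT;

    void lock()
    {
      /* spin on the flag, yielding the CPU while the lock is contended */
      while (_taken.test_and_set(std::memory_order_acquire))
        std::this_thread::yield();
    }

    void unlock() { _taken.clear(std::memory_order_release); }
  };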
Issue #1832
By moving the stub implementation to rm_session_client.cc, we can use
the generic base/include/rm_session/client.h for base-linux and
base-nova and merely provide platform-specific implementations.
Issue #1832
This patch establishes a common organization of header files
internal to the base framework. The internal headers are located at
'<repository>/src/include/base/internal/'. This structure has been
chosen to make the nature of those headers immediately clear when
included:
#include <base/internal/lock_helper.h>
Issue #1832
This patch integrates the functionality of the former CAP session into
the PD session and unifies the approach of supplementing the generic PD
session with kernel-specific functionality. The latter is achieved by
the new 'Native_pd' interface. The kernel-specific interface can be
obtained via the Pd_session::native_pd accessor function. The
kernel-specific interfaces are named Nova_native_pd, Foc_native_pd, and
Linux_native_pd.
The latter change allowed for the deduplication of the
pd_session_component code among the various base platforms.
To retain API compatibility, we keep the 'Cap_session' and
'Cap_connection' around. But those classes have become mere wrappers
around the PD session interface.
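The wrapper character of the retained 'Cap_session' can be pictured as
follows (sketch; the 'alloc_rpc_cap'/'free_rpc_cap' names are assumptions
about the extended PD-session interface):

  #include <pd_session/pd_session.h>

  /* a Cap_session front end that merely forwards to the PD session */
  struct Cap_session_wrapper
  {
    Genode::Pd_session &_pd;

    Cap_session_wrapper(Genode::Pd_session &pd) : _pd(pd) { }

    Genode::Native_capability alloc(Genode::Native_capability ep) {
      return _pd.alloc_rpc_cap(ep); }

    void free(Genode::Native_capability cap) { _pd.free_rpc_cap(cap); }
  };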
Issue #1841
This patch removes the SIGNAL service from core and moves its
functionality to the PD session. Furthermore, it unifies the PD service
implementation and terminology across the various base platforms.
Issue #1841
This is the default optimization level in the original seL4 SDK. By
adopting O3, we work around a bug [1] in version 2.1.0 that shows up
only at lower optimization levels.
[1] https://github.com/seL4/seL4/issues/20
Previously, ports that were needed for a scenario but were not prepared
or were outdated triggered one assertion each during the second build
stage. This commit adds a mechanism ahead of that point, which gathers
all such ports during the first build stage and reports them in the form
of a list before the second build stage is entered. This list can be
used directly as argument for tool/ports/prepare_port to prepare or
update the ports, respectively. If, however, this mechanism is not
available, for example because a target is built without the first build
stage, the old assertion still prevents the target from running into
trouble with a missing port.
Fixes #1872
This patch updates seL4 from the experimental branch of one year ago to
the master branch of version 2.1. The transition has the following
implications.
In contrast to the experimental branch, the master branch has no way to
manually define the allocation of kernel objects within untyped memory
ranges. Instead, the kernel maintains a built-in allocation policy. This
policy rules out the deallocation of once-used parts of untyped memory.
The only way to reuse memory is to revoke the entire untyped memory
range. Consequently, we cannot share a large untyped memory range for
kernel objects of different protection domains. In order to reuse memory
at a reasonably fine granularity, we need to split the initial untyped
memory ranges into small chunks that can be individually revoked. Those
chunks are called "untyped pages". An untyped page is a 4 KiB untyped
memory region.
The bootstrapping of core has to employ a two-stage allocation approach
now. For creating the initial kernel objects for core, which remain
static during the entire lifetime of the system, kernel objects are
created directly out of the initial untyped memory regions as reported
by the kernel. The so-called "initial untyped pool" keeps track of the
consumption of those untyped memory ranges by mimicking the kernel's
internal allocation policy. Kernel objects created this way can be of
any size. For example, the phys CNode, which is used to store page-frame
capabilities, is 16 MiB in size. Also, core's CSpace uses a relatively
large CNode.
After the initial setup phase, all remaining untyped memory is turned
into untyped pages. From this point on, newly created kernel objects
cannot exceed 4 KiB in size because one kernel object cannot span
multiple untyped memory regions. The capability selectors for untyped
pages are organized similarly to those of page-frame capabilities. There
is a new 2nd-level CNode (UNTYPED_CORE_CNODE) that is dimensioned
according to the maximum amount of physical memory (1M entries, each
entry representing 4 KiB). The CNode is organized such that an index
into the CNode directly corresponds to the physical frame number of the
underlying memory. This way, we can easily determine an untyped page
selector for any physical address, i.e., for revoking the kernel
objects allocated at a specific physical page. The downside is the need
for another 16 MiB chunk of meta data. Also, we need to keep in mind
that this approach won't scale to 64-bit systems. We will eventually
need to replace the PHYS_CORE_CNODE and UNTYPED_CORE_CNODE by CNode
hierarchies to model a sparsely populated CNode.
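The correspondence between physical addresses and untyped-page selectors
thereby boils down to a simple shift (illustrative helper with a
hypothetical name):

  enum { PAGE_SIZE_LOG2 = 12 };  /* 4 KiB untyped pages */

  /* index into the UNTYPED_CORE_CNODE for a given physical address */
  unsigned long untyped_sel_of_phys(unsigned long phys_addr)
  {
    return phys_addr >> PAGE_SIZE_LOG2;
  }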
The size constraint of kernel objects has the immediate implication that
the VM CSpaces of protection domains must be organized via several
levels of CNodes. I.e., as the top-level CNode of core has a size of
2^12, the remaining 20 PD-specific CSpace address bits are organized as
a 2nd-level 2^4 padding CNode, a 3rd-level 2^8 CNode, and several
4th-level 2^8 leaf CNodes. The latter contain the actual selectors for
the page tables and page-table entries of the respective PD.
As another slight difference from the experimental branch, the master
branch requires the explicit assignment of page directories to an ASID
pool.
Besides the adjustment to the new seL4 version, the patch introduces a
dedicated type for capability selectors. Previously, we just used to
represent them as unsigned integer values, which became increasingly
confusing. The new type 'Cap_sel' is a PD-local capability selector. The
type 'Cnode_index' is an index into a CNode (which is generally not
the entire CSpace of the PD).
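The intent behind the distinct types can be sketched as follows
(illustrative definitions, not the literal ones):

  /* PD-local capability selector */
  struct Cap_sel
  {
    unsigned value;
    explicit Cap_sel(unsigned value) : value(value) { }
  };

  /* index relative to one particular CNode, not to the whole CSpace */
  struct Cnode_index
  {
    unsigned value;
    explicit Cnode_index(unsigned value) : value(value) { }
  };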
Fixes #1887
* Move the Synced_interface from os -> base
* Align the naming of "synchronized" helpers to "Synced_*"
* Move Synced_range_allocator to core's private headers
* Remove the raw() and lock() members from Synced_allocator and
Synced_range_allocator, and re-use the Synced_interface for them
* Make core's Mapped_mem_allocator a friend class of Synced_range_allocator
to enable the needed "unsafe" access of its physical and virtual allocators
Fix #1697
Instead of holding SPEC-variable dependent files and directories inline
within the repository structure, move them into 'spec' subdirectories
at the corresponding levels, e.g.:
repos/base/include/spec
repos/base/mk/spec
repos/base/lib/mk/spec
repos/base/src/core/spec
...
Moreover, this commit removes the 'platform' directories. That term was
used in an overloaded sense. All SPEC-relative 'platform' directories are
now named 'spec'. Other files, for instance those related to the
kernel/architecture-specific startup library, were moved from 'platform'
directories to explicit, more meaningful places, e.g., 'src/lib/startup'.
Fix #1673
Instead of returning pointers to locked objects via a lookup function,
the new object pool implementation restricts object access to
functors resp. lambda expressions that are applied to the objects
within the pool itself.
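Access now follows a pattern like the one below (sketch; 'apply' as the
accessor name is an assumption, and 'Session_component'/'upgrade' are mere
placeholders):

  /* the object is valid only within the lambda, which is executed while
     the pool-internal lock is held */
  pool.apply(session_cap, [&] (Session_component *session) {
    if (session)
      session->upgrade(args);  /* placeholder operation */
  });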
Fix #884, Fix #1658
For most platforms except NOVA, a distinction between pager entrypoint
and pager activation is not needed and exists only for historical
reasons. Moreover, the pager thread's execution path is almost identical
across the platforms other than NOVA, HW, and Fiasco.OC. Therefore,
this commit unifies the pager loop for those platforms and removes
the pager activation class.
This commit eliminates the interlaced taking of the destruction lock,
the list lock, and the weak-pointer locks, which could lead to a deadlock
when a locked pointer was constructed while the weak object was in the
process of being destructed.
Now, all weak pointers are invalidated and dequeued at the very
beginning of the weak object's destruction. Moreover, before a weak pointer
gets invalidated during the destruction of a weak object, it gets dequeued,
and the list lock is released again to avoid the former deadlock.
Fix #1607
This patch enables clients of core's TRACE service to obtain the
execution times of trace subjects (i.e., threads). The execution time is
delivered as part of the 'Subject_info' structure.
Right now, the feature is available solely on NOVA. On all other base
platforms, the returned execution times are 0.
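A monitoring client would read the value roughly as follows (sketch;
the 'execution_time()' accessor of 'Subject_info' is an assumption based
on this patch):

  #include <base/trace/types.h>

  void print_execution_time(Genode::Trace::Subject_info const &info)
  {
    /* execution time of the traced thread, 0 on non-NOVA base platforms */
    unsigned long long const time = info.execution_time().value;
    (void)time;
  }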
Issue #813
This patch installs the parent endpoint selector and the PD's CNode into
a PD at its creation time. Furthermore, it initializes the IPC buffer
for the main thread of the new component.
This allows us to see debug messages printed during the early
initialization of init (before init is able to obtain the regular LOG
session). This will be reverted as soon as the initialization of the
non-core base environment works.
To build core and other Genode components, we will need to extend the
base-common.mk library with additions that conflict with the
minimalistic root-task environment of test/sel4. To preserve the
minimalistic root task, we need to decouple it from the base-common
library.