This patch introduces new types for expressing CPU affinities. Instead
of dealing with physical CPU numbers, affinities are expressed as
rectangles in a grid of virtual CPU nodes. This makes it convenient to
assign sets of adjacent CPUs to subsystems, each managing its own
viewport of the coordinate space.
By using 2D Cartesian coordinates, the locality of CPU nodes can be
modeled for different topologies such as SMP (simple Nx1 grid), grids of
NUMA nodes, or ring topologies.
Genode used to create new processes by directly forking from the
respective Genode parent using the process library. The forking process
created a PD session at core merely for propagating the PID of the new
process into core (for later destruction). This traditional mechanism
has the following disadvantages:
First, the PID reported by the creating process to core cannot easily be
validated by core. Therefore, core has to trust the PD client not to
specify the PID of an existing process, which would then get killed
once the PD session is destructed. This problem is documented by
issue #318. Second, there is no way for a Genode process to detect the
failure of any of its grandchildren. The immediate parent of a faulting
process could use the SIGCHLD-and-waitpid mechanism to observe its
children, but this mechanism does not work transitively.
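The non-transitivity can be seen in a plain POSIX example (not Genode
code): 'waitpid' reports state changes of direct children only, so a
grandchild's failure never reaches the outermost process:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main()
    {
        pid_t child = fork();
        if (child == 0) {
            if (fork() == 0)  /* grandchild */
                _exit(1);     /* failure invisible to the outermost process */
            _exit(0);
        }
        int status;
        /* observes the direct child only, never the grandchild */
        waitpid(child, &status, 0);
        return 0;
    }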
By performing the process creation exclusively within core, all Genode
processes become immediate child processes of core. Hence, core can
respond to failures of any of those processes and reflect such
conditions via core's session interfaces. Furthermore, the PID
associated with a PD session is locally known within core and can no
longer be forged. In fact, there is no need at all to make processes
aware of the PIDs of other processes.
Please note that this patch breaks the 'chroot' mechanism that comes in
the form of the 'os/src/app/chroot' program. Because all processes are
forked from core, a chroot'ed process could sneak outside its chroot
environment by just creating a new Genode process. To address this
issue, the chroot mechanism must be added to core.
This patch removes the need for any non-core process to create Unix
domain sockets locally. All sockets used for RPC communication are
created by core and subsequently passed to the other processes via RPC
or the parent interface. The immediate benefit is that no process other
than core needs to access the 'rpath' directory in order to communicate.
However, access to 'rpath' is still needed for accessing dataspaces.
Core creates one socket pair per thread on demand on the first call of
the 'Linux_cpu_session::server_sd()' or 'Linux_cpu_session::client_sd()'
functions. 'Linux_cpu_session' is a Linux-specific extension to the CPU
session interface. In addition to the socket accessors, the extension
provides a mechanism to register the PID/TID of a thread. This
information was formerly propagated into core along with the thread
name as an argument to 'create_thread()'.
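Based on this description, the extension could look roughly like the
following sketch; aside from 'server_sd' and 'client_sd', the member
name 'thread_id' is an assumption, not the verbatim header:

    #include <base/capability.h>
    #include <cpu_session/cpu_session.h>

    namespace Genode {

        struct Linux_cpu_session : Cpu_session
        {
            /* register the Linux PID/TID belonging to 'thread' */
            virtual void thread_id(Thread_capability thread,
                                   unsigned pid, unsigned tid) = 0;

            /* socket descriptors for RPC, created by core on demand */
            virtual Untyped_capability server_sd(Thread_capability thread) = 0;
            virtual Untyped_capability client_sd(Thread_capability thread) = 0;
        };
    }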
Because core creates socket pairs for entrypoints, it needs to know all
threads that are potential entrypoints. For lx_hybrid programs, no
thread information had been propagated into core yet. Hence, this patch
also contains the code for registering the threads of hybrid
applications at core.
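For hybrid applications, the registration could conceptually look as
follows; 'lx_getpid' and 'lx_gettid' stand in for the Linux syscall
wrappers and, like 'thread_id' above, are assumptions:

    /* assumed wrappers around the getpid(2)/gettid(2) system calls */
    int lx_getpid();
    int lx_gettid();

    /* report the calling thread's kernel-assigned IDs to core */
    void register_at_core(Genode::Linux_cpu_session &cpu,
                          Genode::Thread_capability  thread_cap)
    {
        cpu.thread_id(thread_cap, lx_getpid(), lx_gettid());
    }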