Open a capability receive window sized according to the number of capabilities
expected as out parameters per RPC function.
Typically, the number of capabilities expected in the reply of an RPC/IPC call
is 0 or 1. Before this patch, a capability receive window of 4 was always
opened.
On Nova, the capability selectors of receive windows must be naturally aligned
to the size/order of the expected capabilities. Until now, this led to the
issue that the leftover 3 capability selectors could not be reused for new
IPCs, since they are not naturally aligned to 4.
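A minimal sketch of the idea (the helper below is illustrative, not the code of
this patch): derive the window order from the expected capability count, which
yields a single-selector window for the common 0/1 case.

  /*
   * Illustrative only: derive the log2 order of the capability receive
   * window from the number of capabilities an RPC function may return.
   */
  static unsigned rcv_window_order(unsigned num_caps)
  {
      unsigned order = 0;
      while ((1U << order) < num_caps)
          order++;   /* round up to the next power of two */
      return order;
  }

With 0 or 1 expected capability, this yields an order-0 window (one naturally
aligned selector) instead of the former fixed order-2 (4-selector) window, so
no misaligned selectors are left over.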
Issue #905
If a local thread is to be 'pause'd via the cpu_session, don't wait until it
reaches the recalled state. If the caller is lucky, the thread is already
there; if not, return only the stack pointer.
This avoids deadlocking GDB when it is attached to a process running a server.
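A self-contained sketch of the distinction (types and members are made up for
illustration, not the actual base-nova implementation):

  struct Thread_state_sketch { unsigned long ip, sp; };

  struct Paused_thread
  {
      bool          recalled;  /* did the thread reach the recall handler? */
      unsigned long ip, sp;    /* last known register values */

      Thread_state_sketch pause() const
      {
          if (recalled)
              return { ip, sp };  /* lucky: full register state available */

          return { 0, sp };       /* don't block: report the stack pointer only */
      }
  };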
Issue #478
The 'pause' call on base-nova assumes that a thread can only block on its
associated semaphore. The main reason is that core can then unblock the thread
so that the recall exception gets delivered and the register state can be
obtained.
Unfortunately, the signal-session implementation creates a semaphore that is
unknown to the pager code. Instead, create the semaphore via the pager of the
thread, so that the pager can unblock the signal thread when a pause is issued.
Issue #478
If a thread caused a page fault and later got paused, it left the recall
handler immediately due to the pause call instead of staying in this handler.
Add a (complicated) state machine to detect and handle this case. It is still
not watertight; in particular, server threads may never get recalled if they
never receive an IPC from the outside.
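For illustration only, the involved conditions could be modeled roughly as
follows; the names and granularity of the actual patch's states may differ.

  enum class Recall_state {
      RUNNING,           /* thread executes normally */
      IN_FAULT_HANDLER,  /* thread entered the page-fault/recall handler */
      PAUSE_PENDING,     /* 'pause' was issued while the handler is active */
      PAUSED             /* thread stays recalled until explicitly resumed */
  };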
Fixes #478
This patch introduces new types for expressing CPU affinities. Instead
of dealing with physical CPU numbers, affinities are expressed as
rectangles in a grid of virtual CPU nodes. This clears the way to
conveniently assign sets of adjacent CPUs to subsystems, each of them
managing their respective viewport of the coordinate space.
By using 2D Cartesian coordinates, the locality of CPU nodes can be
modeled for different topologies such as SMP (simple Nx1 grid), grids of
NUMA nodes, or ring topologies.
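A compact sketch of these types (simplified, not the actual interface):

  /* a grid of virtual CPU nodes, e.g., Nx1 for plain SMP */
  struct Affinity_space { unsigned width, height; };

  /* a rectangle of nodes within such a grid */
  struct Affinity_location { unsigned x, y, width, height; };

  /* example: confine a subsystem to the right half of a 4x2 grid */
  static Affinity_space    const space    { 4, 2 };
  static Affinity_location const viewport { 2, 0, 2, 2 };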
r3 contains the recent Nova upstream kernel version plus the Genode-specific
extensions and changes as known from r2.
Additionally, the r3 branch
* now contains the assign_pci patch directly,
* adds support for cross-CPU IPC,
* fixes some issues with freeing up kernel memory that were part of r2, and
* updates the documentation a bit.
Fixes #814
Revoke the right to set the portal id (aka label) when it is no longer needed.
Otherwise, anybody in the system holding a mapping of the portal could reset
the label to something we don't expect.
Issue #667
The distinction between 'ipc.h' and 'ipc_generic.h' is no more. The only
use case for platform-specific extensions of the IPC support was the
marshalling of capabilities. However, this case is accommodated by a
function interface ('_marshal_capability', '_unmarshal_capability'). By
moving the implementation of these functions from the headers into the
respective ipc libraries, we can abandon the platform-specific 'ipc.h'
headers.
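For illustration, the per-platform interface boils down to two hooks along
these lines (the signatures are simplified approximations, each defined in the
platform's ipc library):

  namespace Genode { class Msgbuf_base; class Native_capability; }

  /* append a capability to an outgoing message */
  void _marshal_capability(Genode::Msgbuf_base &snd_msg,
                           Genode::Native_capability const &cap);

  /* extract a capability from a received message */
  void _unmarshal_capability(Genode::Msgbuf_base &rcv_msg,
                             Genode::Native_capability &cap);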
The cleanup call must already be performed during the _dissolve function,
shortly after the object at the cap_session is freed. Otherwise, there is the
chance that an in-flight IPC will find the to-be-dissolved object again.
The bomb test triggered the case that an already dissolved rpc_object was found
by an in-flight IPC. If the rpc_object had already been freed by alloc->destroy,
the thread using this stale rpc_object pointer caused page faults in core.
Partly fixes #549
If we run out of capability indexes, the bit allocator throws an exception.
If this happens, the code seems to hang and nothing happens.
One could instead catch the exception and print some diagnostic message. This
would be nice but doesn't work: printing a diagnostic message itself
potentially performs IPC and will allocate new capability indexes, at least
for the receive window.
So, catch the exception and let the thread die, so that at least the
instruction pointer is left as a trace to identify the reason for the trouble.
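A self-contained sketch of this pattern (the allocator and exception types are
placeholders, not the real ones):

  struct Out_of_indexes { };

  struct Cap_index_allocator
  {
      unsigned next = 0, limit = 16;

      unsigned alloc()
      {
          if (next == limit)
              throw Out_of_indexes();
          return next++;
      }
  };

  unsigned alloc_cap_index(Cap_index_allocator &a)
  {
      try { return a.alloc(); }
      catch (Out_of_indexes) {
          /*
           * Don't print a message here - that could itself perform IPC
           * and allocate capability indexes. Loop forever as a stand-in
           * for letting the thread die; the instruction pointer marks
           * the spot.
           */
          for (;;) ;
      }
  }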
Fixes #625
On Linux, we want to attach additional attributes to processes, i.e.,
the chroot location, the designated UID, and GID. Instead of polluting
the generic code with such Linux-specific platform details, I introduced
the new 'Native_pd_args' type, which can be customized for each
platform. The platform-dependent policy of init is factored out in the
new 'pd_args' library.
The new 'base-linux/run/lx_pd_args.run' script can be used to validate
the propagation of those attributes into core.
Note that this patch does not add the interpretation of the new UID and GID
attributes by core. This will be the subject of a follow-up patch.
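As a rough illustration (the member names are assumptions, not the actual
'Native_pd_args' definition), the Linux-specific attributes could be bundled as
follows:

  struct Native_pd_args
  {
      char const *root;  /* chroot location */
      unsigned    uid;   /* designated user ID */
      unsigned    gid;   /* designated group ID */
  };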
Related to #510.
Extend the tracking of delegated and translated items. The additional
information is used to free up only unused/unwanted mapped capabilities and to
avoid unnecessary revokes on capability indexes where nothing has been
received.
Fixes #430
Unify the handling of UTCBs. Since commit ea38aad30e, the UTCB of the main
thread is at a fixed location - per convention.
So we can remove all the ugly code for transferring the UTCB address during
process creation.
To do so, the UTCB of the main thread of Core must also be inside Genode's
thread-context area so that it can be handled the same way. Unfortunately, the
UTCB of the main thread of Core can't be chosen; it is defined by the kernel.
Possible solutions:
- make the virtual address of the first thread's UTCB configurable in the
  hypervisor
- map the UTCB of the first thread inside Core to the desired location
This commit implements the second option.
Kernel patch: make the UTCB mappable
With this patch, the UTCB of the main thread of Core is mappable.
Fixes #374
Noux actually uses the sp variable during thread creation and expects it to be
set accordingly. This wasn't the case for the main thread; it was always set to
the address of the main thread's UTCB.
Move the context area close to the end of the user-available virtual address
space, so that Vancouver can obtain as much of the lower virtual address range
as possible for VMs.
This patch introduces the functions 'affinity' and 'num_cpus' to the CPU
session interface. The interface extension will allow the assignment of
individual threads to CPUs. At this point, it is just a stub with no
actual platform support.
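A sketch of the extension (signatures approximated, not verbatim):

  struct Cpu_session_affinity_sketch
  {
      /* number of CPUs available at the CPU session */
      virtual unsigned num_cpus() const = 0;

      /* assign the thread referred to by 'thread_cap' to CPU number 'cpu' */
      virtual void affinity(int thread_cap, unsigned cpu) = 0;

      virtual ~Cpu_session_affinity_sketch() { }
  };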
The cpu_session interface fails to be virtualized by gdb_monitor because
platform-nova uses an extended nova_cpu_session interface.
The problem was that threads were created directly at core without the
knowledge of gdb_monitor. This led to the situation that gdb_monitor didn't
know about all the threads to be debugged.
Tunnel the additional parameters required on base-nova through the state() call
of the cpu_session interface before the thread is actually started.