Remove signal context object from signal source component list (_signal_queue)
before destruction, otherwise we get a dangling pointer.
On native hardware for base-nova, the signal source thread triggered page
faults in the Signal_source_component::wait_for_signal() method when the signal
context got freed up in Signal_session_component::free_context but was still
enqueued in Signal_source_component::_signal_queue.
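A minimal stand-alone sketch of the pattern with generic types (not the
actual core classes):

  #include <list>

  struct Signal_context { /* ... */ };

  struct Signal_source
  {
    std::list<Signal_context *> _signal_queue;   /* pending contexts */

    /* take the context out of the queue, if enqueued */
    void release(Signal_context *context) { _signal_queue.remove(context); }
  };

  /* free a context only after it has left the source's queue,
   * so the source never keeps a dangling pointer */
  void free_context(Signal_source &source, Signal_context *context)
  {
    source.release(context);
    delete context;
  }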
Fixes #600
Add functionality to look up an object and lock it. Additionally, the
case is handled where an object may already be in destruction; in that
case, the lookup refuses to return the object.
The object_pool generalizes the lookup-and-lock functionality of the
rpc_server and serves as the base for follow-up patches that fix
dangling-pointer issues.
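The following stand-alone sketch illustrates the intended lookup-and-lock
semantics (standard C++ types instead of the actual Genode classes; the
'in_destruction' flag is an assumption for the sake of the example):

  #include <map>
  #include <mutex>

  struct Object
  {
    std::mutex lock;
    bool       in_destruction = false;   /* set once destruction has begun */
  };

  struct Object_pool
  {
    std::mutex               pool_lock;
    std::map<long, Object *> objects;

    /* return the object locked, or nullptr if unknown or in destruction */
    Object *lookup_and_lock(long id)
    {
      std::lock_guard<std::mutex> guard(pool_lock);

      auto it = objects.find(id);
      if (it == objects.end())
        return nullptr;

      Object *obj = it->second;
      if (obj->in_destruction)
        return nullptr;          /* deny returning a dying object */

      obj->lock.lock();          /* caller is responsible for unlocking */
      return obj;
    }
  };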
The CPU session interface comes with the ability to install an
exception handler per thread. This patch enhances the feature with the
provision of a default signal handler that is used if no thread-specific
handler is installed. The default signal handler can be set by
specifying an invalid thread capability and a valid signal context
capability.
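A usage sketch of the new provision (the 'cpu' session and 'handler_cap'
are assumed to exist; the exact signature may differ):

  /* an invalid (default-constructed) thread capability selects the
   * session-wide default handler instead of a specific thread */
  Genode::Thread_capability no_thread;

  /* 'handler_cap' is a valid signal-context capability of the handler */
  cpu.exception_handler(no_thread, handler_cap);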
Furthermore, this patch relaxes the requirement on the order of the
'exception_handler' and 'set_pager' calls. Originally, the exception
handler could not be installed before setting a pager. Now, we remember
the installed exception handler in the 'Cpu_thread' and propagate it to
the platform thread at a later time.
If no platform thread was created before someone destroys a thread object,
there is no valid UTCB available. Therefore, we have to check for this
condition before accessing the UTCB when destroying a thread object.
The generic 'sleep_forever()' function creates an Ipc_server object which
might not get cleaned up correctly when the thread gets destroyed and
unneeded capability references could remain and drain the capability index
allocator. With this patch a lock gets used on thread exit instead of
calling 'sleep_forever()'.
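A minimal sketch of the idea in standard C++ (the real patch uses a Genode
lock; here a never-satisfied future plays the role of a lock that is
acquired once more after being constructed in the locked state):

  #include <future>

  /* block the exiting thread forever without constructing an Ipc_server
   * and without acquiring further capability references */
  void block_forever()
  {
    std::promise<void> never_fulfilled;
    never_fulfilled.get_future().wait();   /* never returns */
  }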
Fixes #538.
This patch reflects possible allocation errors in a more specific way to
the caller of 'alloc_aligned'; in particular, out-of-metadata and
out-of-memory are treated as different conditions.
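A sketch of how the distinct conditions could be surfaced to the caller
(the enum and names are illustrative, not the actual allocator interface):

  /* instead of a bare null pointer, report why the allocation failed */
  enum class Alloc_error { OK, OUT_OF_MEMORY, OUT_OF_METADATA };

  struct Alloc_result { Alloc_error error; void *ptr; };

  void handle(Alloc_result r)
  {
    switch (r.error) {
    case Alloc_error::OUT_OF_METADATA: /* upgrade meta-data quota, retry   */ break;
    case Alloc_error::OUT_OF_MEMORY:   /* reflect memory exhaustion upward */ break;
    case Alloc_error::OK:              /* use r.ptr */                        break;
    }
  }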
Related to issue #526.
This patch introduces clean synchronization between the entrypoint
thread and the caller of the 'Rpc_entrypoint' destructor. The most
important change is the handling of the 'Ipc_server' destruction. This
object is in the local scope of the server's entry function. However,
since the server loop used to be an infinite loop, there was hardly any
chance to destruct the object in a clean way. Hence, the
'Rpc_entrypoint' destructor used to explicitly call '~Ipc_server'.
Unfortunately, this approach led to problems because there are indeed
rare cases where the server thread leaves the scope of the entry
function, namely uncaught exceptions. In such a case, the destructor
would have been called twice.
With the new protocol, we make sure to leave the scope of the entry
function and thereby destroy the 'Ipc_server' object as expected. This
is achieved by propagating the exit condition through a local RPC call
to the entrypoint. This way, the entrypoint gets unblocked.
Furthermore, '~Rpc_entrypoint' makes use of the new
'join' function to wait for the completion of the server thread.
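A self-contained sketch of the protocol with standard C++ threads (a
condition variable stands in for the local RPC call that carries the exit
condition; the real entrypoint dispatches RPC requests instead):

  #include <condition_variable>
  #include <cstdio>
  #include <mutex>
  #include <thread>

  /* the object whose clean destruction used to be the problem */
  struct Ipc_server
  {
    ~Ipc_server() { std::puts("Ipc_server destructed exactly once"); }
  };

  struct Rpc_entrypoint
  {
    std::mutex              mx;
    std::condition_variable cv;
    bool                    exit_requested = false;

    std::thread thread { [this] { entry(); } };

    void entry()
    {
      Ipc_server srv;   /* local to the entry function, as in the real code */

      /* blocking receive; the exit request plays the role of the local RPC */
      std::unique_lock<std::mutex> l(mx);
      cv.wait(l, [&] { return exit_requested; });

      /* leaving this scope destroys 'srv' within the server thread itself */
    }

    ~Rpc_entrypoint()
    {
      { std::lock_guard<std::mutex> l(mx); exit_requested = true; }
      cv.notify_one();   /* propagate the exit condition to the entrypoint */
      thread.join();     /* counterpart of the new 'join' function */
    }
  };

  int main() { Rpc_entrypoint ep; }   /* the destructor performs the shutdown */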
On Linux, we want to attach additional attributes to processes, i.e.,
the chroot location, the designated UID, and GID. Instead of polluting
the generic code with such Linux-specific platform details, I introduced
the new 'Native_pd_args' type, which can be customized for each
platform. The platform-dependent policy of init is factored out in the
new 'pd_args' library.
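An illustrative shape for such a platform-specific argument bundle (the
field names are assumptions for the sake of the example, not the actual
base-linux definition):

  /* Linux-specific PD arguments as they could travel from init
   * (via the 'pd_args' policy) to core alongside the generic ones */
  struct Native_pd_args
  {
    char const *chroot_path;   /* chroot location for the new process */
    unsigned    uid;           /* designated user ID  */
    unsigned    gid;           /* designated group ID */
  };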
The new 'base-linux/run/lx_pd_args.run' script can be used to validate
the propagation of those attributes into core.
Note that this patch does not add the interpretation of the new UID and
GID attributes by core. This will be the subject of a follow-up patch.
Related to #510.
Using the new 'join()' function, the caller can explicitly block for the
completion of the thread's 'entry()' function. The test case for this
feature can be found at 'os/src/test/thread_join'. For hybrid
Linux/Genode programs, the 'Thread_base::join()' does not map directly
to 'pthread_join'. The latter function is already called by the
destructor of 'Thread_base'. According to the documentation, subsequent
calls of 'pthread_join' for one thread may result in undefined behaviour.
So we use a 'Genode::Lock' on this platform, which is in line with the
other platforms.
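The semantics can be pictured with std::thread (runnable stand-alone
example; 'Thread_base::join()' behaves analogously on the Genode API
level):

  #include <cstdio>
  #include <thread>

  int main()
  {
    std::thread worker([] { std::puts("entry() running"); });

    worker.join();   /* caller blocks until the entry function has returned */
    std::puts("entry() completed, thread object may be destructed safely");
  }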
Related to #194, #501
Implies support for the ARMv6 architecture through 'base-hw'.
Get rid of 'base/include/drivers' except for 'base/include/drivers/uart'.
Merge with the support for trustzone on VEA9X4 that came from
Stefan Kalkowski.
Leave board drivers in 'base/include/platform'.
Rework structure of the other drivers that were moved to
'base_hw/src/core' and those that came with the trustzone support.
Beautify further stuff in 'base_hw'.
Test 'nested_init' with 'hw_imx31' (hardware) and 'hw_panda_a2' (hardware),
'demo' and 'signal' with 'hw_pbxa9' (qemu) and 'hw_vea9x4'
(hardware, no trustzone), and 'vmm' with 'hw_vea9x4'
(hardware, with trustzone).
When building the Fiasco.OC kernel and L4Linux within the Genode build
system, forward the CC and CXX variables. They might refer to useful
tools like ccache or distcc that speed up compilation. Moreover, don't
delete MAKEFLAGS when building Fiasco.OC, as this hinders parallel
builds.
Replacing the local name of a capability index object which exists in the
capability map can destroy the AVL tree order of the capability map. With
this patch the outdated object gets removed from the map and a new object
gets inserted afterwards.
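The pattern in a stand-alone form, with a std::map in place of the
AVL-based capability map:

  #include <map>

  /* changing the key of an entry in place would corrupt the ordered tree;
   * remove the outdated entry and insert a fresh one instead */
  void rename_entry(std::map<long, int> &cap_map, long old_name, long new_name)
  {
    auto it = cap_map.find(old_name);
    if (it == cap_map.end())
      return;

    int value = it->second;
    cap_map.erase(it);                     /* drop the outdated object */
    cap_map.insert({ new_name, value });   /* re-insert under the new local name */
  }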
Fixes #435.
Until now, all services in core were created and registered in the
generic main routine. Although an x86-specific service (I/O ports)
already exists, there was no way to announce core services for certain
platforms only. This commit introduces a hook function in the 'Platform'
class that enables the registration of platform-specific services.
Moreover, the io-port service is now offered on x86 platforms only.
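An illustrative sketch of such a hook (simplified types and names, not the
literal core interface):

  struct Service_registry { /* ... */ };

  struct Platform
  {
    /* generic platforms contribute no additional services by default */
    virtual void add_local_services(Service_registry &) { }
    virtual ~Platform() { }
  };

  struct X86_platform : Platform
  {
    void add_local_services(Service_registry &services) override
    {
      /* announce the I/O-port service on x86 only, e.g.,
       * services.insert(io_port_root); */
      (void)services;
    }
  };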
Implement shared IRQs using the 'Irq_proxy' class (see the sketch after
the list below).
* Nova: Added global worker 'Irq_thread' support in core and adapted
  Irq_session.
* FOC: Adapted the IRQ session code; x86 has shared-IRQ support, ARM uses
  the old model. Read and set the 'mode' argument (from MADT) in
  'Irq_session'.
* OKL4: Use the generic 'Irq_proxy'.
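A simplified sketch of the shared-IRQ idea behind 'Irq_proxy' (standard
C++ types; the real class additionally runs a worker thread per physical
IRQ):

  #include <functional>
  #include <mutex>
  #include <vector>

  struct Irq_proxy
  {
    std::mutex                         mx;
    std::vector<std::function<void()>> sharers;   /* one notifier per Irq_session */

    void add_sharer(std::function<void()> notify)
    {
      std::lock_guard<std::mutex> l(mx);
      sharers.push_back(std::move(notify));
    }

    /* called from the proxy's worker whenever the physical IRQ fires */
    void deliver()
    {
      std::lock_guard<std::mutex> l(mx);
      for (auto &notify : sharers)
        notify();
    }
  };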
Fixes issue #390
The alternative weighted scheduler might lead to some threads not making
any progress anymore (take, for example, the signal test). So we have to
use the fixed-priority scheduler in the 64-bit kernel configuration as
well.
Normally, sigma0 does not create an answer tag for a request or fault;
it simply reuses the message tag received with the request. This doesn't
work out when I/O ports are requested. This patch constructs an
appropriate answer tag. Moreover, we have to enable I/O-port protection
in the kernel configuration.
This patch introduces the functions 'affinity' and 'num_cpus' to the CPU
session interface. The interface extension will allow the assignment of
individual threads to CPUs. At this point, it is just a stub with no
actual platform support.
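A sketch of the extended interface (simplified signatures; on platforms
without support, both functions may remain stubs):

  struct Thread_capability { };   /* placeholder for the real capability type */

  struct Cpu_session
  {
    /* number of CPUs available to threads of this session */
    virtual unsigned num_cpus() const = 0;

    /* assign the given thread to the CPU with index 'cpu_no' */
    virtual void affinity(Thread_capability thread, unsigned cpu_no) = 0;

    virtual ~Cpu_session() { }
  };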
The Cap_mapping abstraction in core shouldn't use a Cap_index directly
but a Native_capability instead, because reference counting can break
when the same Cap_index is used in both a Cap_mapping and a
Native_capability. This commit finally fixes #208.
This commit fixes several issues that were triggered, e.g., by the
'noux_tool_chain' run script (fixes #208 in part). The following
problems are tackled:
* Don't reference-count capability selectors within a task that are
  actually controlled by core (all beneath 0x200000), because it is
  undecidable which "version" of a capability selector we currently use.
  E.g., when a thread gets destroyed and a new one gets created
  immediately, some other thread might still have a Native_capability
  pointing to the already destroyed thread's gate capability slot, which
  is now valid again (for the new thread)
* In core, we cannot invalidate and remove a capability from the
  so-called Cap_map before every reference to it is destroyed. So don't
  do this in Cap_session_component::free but only decrement the reference
  count there; the actual removal can only be done in Cap_map::remove.
  Because core also has to invalidate a to-be-removed capability in all
  protection domains, we have to implement a core-specific
  Cap_map::remove method
* When a capability gets inserted into the Cap_map and we detect an old,
  invalid entry with the same id in the tree, don't just overmap that
  invalid entry (as there exist remaining references to it), but remove
  it from the tree and allocate a new entry.
* Use the Cap_session_component interface to free a Pager_object when it
  gets dissolved, as it's also used for allocation
Let the Fiasco.OC base platform pass the cap_integrity run script,
meaning that it is no longer feasible to fake a capability by using a
valid one together with a guessed local_name.
Eliminate prints to stderr for normal messages because they lead to
exceptional returns in Tcl scripts, e.g., when a run script is triggered
by the autopilot, even if the script's return code itself is zero.
This patch extends the RAM session interface with the ability to
allocate DMA buffers. The client specifies the type of RAM dataspace to
allocate via the new 'cached' argument of the 'Ram_session::alloc()'
function. By default, 'cached' is true, which corresponds to the common
case and the original behavior. When setting 'cached' to 'false', core
takes the precautions needed to register the memory as uncached in the
page table of each process that has the dataspace attached.
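A usage sketch (the environment accessor and buffer size are assumed to be
available; the 'cached' flag follows the description above):

  /* request an uncached RAM dataspace suitable as a DMA buffer */
  Genode::Ram_dataspace_capability ds =
    Genode::env()->ram_session()->alloc(buffer_size, /* cached */ false);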
Currently, the support for allocating DMA buffers is implemented for
Fiasco.OC only. On x86 platforms, it is generally not needed. But on
platforms with more relaxed cache coherence (such as ARM), user-level
device drivers should always use uncacheable memory for DMA transactions.
When sigma0 runs at a lower priority than the rest of the threads in the
system, it can happen that, while answering a page fault or an I/O
memory-area request, the timeslice of the caller (the core pager) gets
fully consumed. As long as other threads are still runnable and don't
block, sigma0 won't make progress anymore because it runs at the lowest
priority. This commit simply sets sigma0's priority to the highest in
the system.
When invoking the bootstrap build in the L4Re build system to create a
single ELF image containing all files needed to boot a scenario, don't
use the 'ENTRY' variable but the 'E' variable instead. Otherwise,
'ENTRY' might get overridden (depending on the make version). Moreover,
using 'E' seems to be the way L4Re expects to be invoked.
Fixes #226
When core requests all RAM from sigma0, it normally unmaps page 0 so
that null-pointer dereferences are detected by a page fault. The unmap
syscall in the Fiasco.OC base platform was used insufficiently in this
particular case.
Introduce a process-global spin lock for the Cap_index reference counter
to avoid non-atomic increments/decrements of the counter. Here, we don't
use a static Spinlock object because its constructor might not have run
before the lock is used for the first time.
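A sketch in standard C++ of a statically initialized spin lock that does
not depend on a constructor having run (std::atomic_flag stands in for the
hand-rolled lock):

  #include <atomic>

  /* process-global spin lock guarding the reference counters;
   * macro/aggregate initialization avoids any constructor-order issue */
  static std::atomic_flag ref_counter_lock = ATOMIC_FLAG_INIT;

  void ref_lock()
  {
    while (ref_counter_lock.test_and_set(std::memory_order_acquire))
      ; /* spin */
  }

  void ref_unlock()
  {
    ref_counter_lock.clear(std::memory_order_release);
  }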
The following fixes partly solve the problems triggered by the noux stress
test introduced by nfeske in issue #208.
* The check whether a capability exists in the Cap_map and its insertion
  have to be done atomically
* While removing a capability, it is looked up in the Cap_map via its id;
  check whether the found capability pointer is the same as the one being
  removed, otherwise the wrong capability gets freed
* When a local capability is marshalled or unmarshalled, only the local
  pointer gets transferred, not the redundant capability id
* Introduce several assertions and warnings to facilitate debugging
This patch increases the size of the JDB kernel object names buffer. The
original size was too small for some Genode scenarios and caused missing
thread names in the kernel debugger thread list.
Fixes #191.