When using the Allocator interface, one cannot tell which alignment the
resulting allocations fulfill. However, given the architectural alignment
requirements of memory accesses, at least on ARM, one wants memory
allocations (the most common use of allocators) to be word-aligned
automatically. Previously, at least the AVL allocator simply called
alloc_aligned from its alloc implementation without specifying the align
argument, which defaulted to 0. This led to unaligned-access faults when
using the AVL allocator as Allocator (as done for the metadata slab of an
AVL allocator that uses the AVL allocator itself as backing store). To
avoid such pitfalls in the future, we force users of alloc_aligned to
always specify align (there is no reason to call alloc_aligned without an
alignment anyway).
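As a sketch of the interface change (the exact 'Range_allocator' signature
may differ), dropping the default argument forces each caller to state the
alignment:

  /* before: align had a default of 0, so callers could silently omit it */
  virtual Alloc_return alloc_aligned(size_t size, void **out_addr,
                                     int align = 0) = 0;

  /* after: the alignment must always be specified by the caller */
  virtual Alloc_return alloc_aligned(size_t size, void **out_addr,
                                     int align) = 0;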
Ref #1941
Besides unifying the Msgbuf_base classes across all platforms, this
patch merges the Ipc_marshaller functionality into Msgbuf_base, which
leads to several further simplifications. For example, this patch
eventually moves the Native_connection_state and all remaining state
from the former Ipc_server into the actual server loop, which not only
makes the flow of control and information much more obvious, but is
also more flexible. E.g., on NOVA, we don't even have the notion of
reply-and-wait. Now, we are no longer forced to pretend otherwise.
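A hypothetical sketch of the resulting structure (names are illustrative,
not the actual base-internal API):

  /* the server loop owns all request/reply state locally */
  Msgbuf<1024> snd_buf, rcv_buf;
  Reply_capability reply_cap;

  for (;;) {
      Request const request = ipc_reply_and_wait(reply_cap, snd_buf, rcv_buf);
      reply_cap = dispatch(request, rcv_buf, snd_buf);
  }

On kernels without a combined reply-and-wait operation, the loop body can
issue a plain reply followed by a wait instead.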
Issue #1832
This patch changes the organization of the slab blocks within the slab
allocator. Originally, blocks were kept in a list sorted by the number
of free entries. However, it turned out that the maintenance of this
invariant involves a lot of overhead in the presence of a large number
of blocks. The new implementation manages blocks within a ring in no
particular order and maintains a pointer to the block where the next
allocation is attempted. This obviates the need to sort blocks when
allocating and deallocating.
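A minimal sketch of the new organization (member names hypothetical):

  struct Slab_block
  {
      Slab_block *next;   /* successor within the ring, no particular order */
      unsigned    avail;  /* number of free entries in this block           */
  };

  struct Slab
  {
      Slab_block *_curr;  /* block where the next allocation is attempted */

      void *_alloc_from(Slab_block *b);  /* not shown */

      void *alloc()
      {
          /* walk the ring starting at the current block */
          for (Slab_block *b = _curr; ; b = b->next) {
              if (b->avail) { _curr = b; return _alloc_from(b); }
              if (b->next == _curr) break;  /* wrapped around once */
          }
          return nullptr;  /* at this point, the real slab grows by one block */
      }
  };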
Fixes #1908
This patch ensures that the 'Allocator_avl' releases all memory obtained
from the meta-data allocator at destruction time. If allocations are
still dangling, it produces a warning, hinting at possible memory leaks.
Finally, it properly reverts all 'add_range' operations.
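A hedged sketch of the destruction-time behavior (names hypothetical):

  Allocator_avl_base::~Allocator_avl_base()
  {
      if (_dangling_allocations())
          PWRN("dangling allocations at destruction time, possible leak");

      /* return all meta data to the meta-data allocator, undo add_range */
      _revert_allocations_and_ranges();
  }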
This patch makes sure that the dataspace pool is flushed before
destructing the heap-local allocator-avl instance. With the original
destruction order, the allocator would still contain dangling
allocations on behalf of the dataspace pool when destructed. In
practice, this caused no problem because the underlying backing store is
eventually freed on the destruction of the pool. But it now triggers a
runtime warning because the allocator has become more strict with
regard to dangling allocations.
This commit introduces the new `Component` interface in the form of the
headers base/component.h and base/entrypoint.h. The os/server.h API
has become merely a compatibility wrapper and will eventually be removed.
The same holds true for os/signal_rpc_dispatcher.h. The mechanism has
moved to base/signal.h and is now called 'Signal_handler'.
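For illustration, a minimal component skeleton against the new interface
might look as follows (a sketch, details of the final API may differ):

  #include <base/component.h>

  /* called by the framework instead of a classical main() */
  void Component::construct(Genode::Env &env)
  {
      /* create and wire up the component's objects here, using env */
      (void)env;
  }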
Since the patch shuffles headers around, please do a 'make clean' in the
build directory.
Issue #1832
This commit replaces the stateful 'Ipc_client' type with the plain
function 'ipc_call' that takes all the needed state as arguments.
The stateful 'Ipc_server' class is retained but has moved from the public
API to the internal ipc_server.h header. The kernel-specific
implementations were cleaned up and simplified. E.g., the 'wait'
function no longer exists. The badge and exception code are no
longer carried in the message buffers but are handled in kernel-specific
ways.
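As a rough sketch (the actual internal signature may differ), the stateless
call takes everything it needs as arguments:

  /* one-shot IPC call, no client object involved */
  Rpc_exception_code ipc_call(Native_capability dst,
                              Msgbuf_base &snd_msg, Msgbuf_base &rcv_msg,
                              size_t rcv_caps);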
Issue #610
Issue #1832
This patch moves the details about stack allocation and organization to
the base-internal headers. Thereby, I replaced the notion of "thread
contexts" by "stacks" as the latter term is much more intuitive. The fact that
we place thread-specific information at the bottom of the stack is not
worth introducing new terminology.
Issue #1832
This patch integrates the functionality of the former CAP session into
the PD session and unifies the approach of supplementing the generic PD
session with kernel-specific functionality. The latter is achieved by
the new 'Native_pd' interface. The kernel-specific interface can be
obtained via the Pd_session::native_pd accessor function. The
kernel-specific interfaces are named Nova_native_pd, Foc_native_pd, and
Linux_native_pd.
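For example, a NOVA-specific client might obtain the kernel-specific
interface roughly like this (a sketch):

  /* request the kernel-specific PD interface from a generic PD session */
  Nova_native_pd_client native_pd(pd_session.native_pd());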
The Native_pd mechanism allowed for the deduplication of the
pd_session_component code among the various base platforms.
To retain API compatibility, we keep the 'Cap_session' and
'Cap_connection' around. But those classes have become mere wrappers
around the PD session interface.
Issue #1841
This patch removes the SIGNAL service from core and moves its
functionality to the PD session. Furthermore, it unifies the PD service
implementation and terminology across the various base platforms.
Issue #1841
When unblocking a thread in Semaphore::up() while holding the fifo meta-data
lock, it might happen that the lock holder gets destroyed by the very thread
it has just unblocked. This happened, for instance, in the pthread test in
the past, where thread destruction was synchronized via a semaphore. There
is no need to hold
the lock during the unblock operation, so we should do it outside the critical
section.
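A condensed sketch of the pattern (simplified, names approximate):

  void Semaphore::up()
  {
      Element *blocker = nullptr;
      {
          Lock::Guard guard(_meta_lock);  /* critical section: queue only */
          if (++_cnt <= 0)
              blocker = _queue.dequeue();
      }
      /* wake the blocked thread outside the critical section */
      if (blocker)
          blocker->wake_up();
  }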
Fix #1333
'block_for_signal' and 'pending_signal' now set a pending flag in the signal
context in order to keep track of pending signals. The context list is also
used by the 'Signal_receiver' during destruction.
Fixes #1738
Currently, when a signal arrives in the main thread, the signal dispatcher is
retrieved and called from the main thread, the dispatcher uses a proxy object
that in turn sends an RPC to the entry point. This becomes a problem when the
entry point destroys the dispatcher object before the dispatch function has
been called by the main thread. Therefore, the main thread now simply sends
an RPC to the entry point upon signal arrival, and the dispatching is
handled solely by the entry point.
Issue #1738
Holding the object pool's lock while trying to obtain an object's lock
can lead to deadlock situations when more than one thread tries to
access multiple objects at once (e.g., when transfer_quota gets called
simultaneously by the init and entrypoint threads in core). To avoid
holding the object-pool lock for too long while still accessing object
pointers safely, this commit updates the object-pool implementation
to use weak pointers during object retrieval.
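Sketched with Genode's weak-pointer primitives (helper names hypothetical):

  /* obtain a weak pointer under the pool lock, then release the lock */
  Weak_ptr<Object> weak = _lookup_weak_ptr(badge);

  /* locking the object may fail if it is currently being destructed */
  Locked_ptr<Object> obj(weak);
  if (obj.valid())
      obj->handle_request();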
Fix #1704
* Move the Synced_interface from os -> base
* Align the naming of "synchronized" helpers to "Synced_*"
* Move Synced_range_allocator to core's private headers
* Remove the raw() and lock() members from Synced_allocator and
  Synced_range_allocator, and re-use the Synced_interface for them (see the
  sketch after this list)
* Make core's Mapped_mem_allocator a friend class of Synced_range_allocator
to enable the needed "unsafe" access of its physical and virtual allocators
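The Synced_interface wraps an interface such that each access holds a lock,
roughly like this (simplified):

  Synced_interface<Range_allocator> synced_alloc(lock, &alloc);
  synced_alloc()->alloc(4096);  /* operator() returns a lock-holding guard */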
Fix #1697
Instead of returning pointers to locked objects via a lookup function,
the new object pool implementation restricts object access to
functors or lambda expressions that are applied to the objects
within the pool itself.
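For example (the pool, session type, and methods shown are illustrative):

  /* the pool applies the lambda to the looked-up object, or nullptr */
  _sessions.apply(session_cap, [&] (Session_component *session) {
      if (session)
          session->upgrade_quota(amount);
  });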
Fix #884, #1658
This commit eliminates the interlaced taking of the destruction lock, list
lock, and weak-pointer locks that could lead to a deadlock when a locked
pointer was constructed while the weak object was in the process of
destruction.
Now, all weak pointers are invalidated and dequeued at the very beginning
of the weak object's destruction. Moreover, before a weak pointer gets
invalidated during the destruction of a weak object, it gets dequeued, and
the list lock is released again to avoid the former deadlock.
Fix #1607
Up to now, it was not possible to trace threads that use a Cpu_session
other than env()->cpu_session() (as done by VirtualBox).
This problem is now solved by setting the Cpu_session explicitly when
creating the event logger and attaching the trace control area when
creating the thread.
Fixes #1618
The recent change of the TRACE session interface triggered the
following warning:
/home/no/src/genode/repos/base/include/base/ipc.h:79:4: warning: ‘ret’ may be used uninitialized in this function [-Wmaybe-uninitialized]
*reinterpret_cast<T *>(&_sndbuf[_write_offset]) = value;
^
In file included from /home/no/src/genode/repos/base/src/core/include/trace/session_component.h:19:0,
from /home/no/src/genode/repos/base/src/core/trace_session_component.cc:15:
/home/no/src/genode/repos/base/include/base/rpc_server.h:132:42: note: ‘ret’ was declared here
typename This_rpc_function::Ret_type ret;
The warning occurs for basic return types (like size_t), which are
indeed not initialized. The variable gets its value assigned by the
corresponding 'call_member' overload, to which the variable is passed as
reference. But the compiler apparently is not able to detect this assignment.
Declaring 'ret' with a C++11-style default initializer fixes the warning.
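The fix, as described, amounts to:

  /* C++11 default initializer, value-initializes basic return types */
  typename This_rpc_function::Ret_type ret{};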
This patch enables clients of core's TRACE service to obtain the
execution times of trace subjects (i.e., threads). The execution time is
delivered as part of the 'Subject_info' structure.
Right now, the feature is available solely on NOVA. On all other base
platforms, the returned execution times are 0.
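Clients can read the value roughly as follows (a sketch, field names as
presumed in the new 'Subject_info' structure):

  unsigned long long executed(Genode::Trace::Subject_info const &info)
  {
      return info.execution_time().value;
  }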
Issue #813
Currently, libc_noux includes the 'base/src/base/env/platform_env.h' file
to be able to reinitialize the environment using the 'Platform_env'
interface. For base-linux, a special version of this file exists and the
inclusion of the generic version in libc_noux causes GCC 4.9 to make wrong
assumptions about the memory layout of the 'Env' object returned by
'Genode::env()'.
This commit moves the reinitialization functions to the 'Env' interface to
avoid the need to include the 'platform_env.h' file in libc_noux.
Fixes #1510
If a null-terminated string of exactly length MAX (terminating 0 byte
included) is provided, it is wrongly treated as invalid because of a flawed
string-length check. This commit fixes the check.
Discovered during #1486 development.
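A hypothetical illustration of such an off-by-one (not the actual code):

  #include <string.h>

  /* a string that exactly fills the buffer, 0 byte included, is valid */
  bool valid_string(char const *s, size_t max)
  {
      size_t const len = strlen(s) + 1;  /* length incl. terminating 0 */
      return len <= max;  /* a '<' here would wrongly reject len == max */
  }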
Physical CPU quota was previously given to a thread only at construction
time, by directly specifying a percentage of the quota of the corresponding
CPU session. Now, a new thread is given a weighting that can be any value.
The physical quota that corresponds to such a weighting depends on the
weightings of the other threads of the same CPU session. Thus, the physical
quota of all threads of a CPU session must be updated whenever a weighting
is added or removed, i.e., each time the session creates or destroys a
thread.
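The translation from weightings to physical quota presumably follows a
proportional scheme like this (a sketch, not the literal implementation):

  /* a thread's physical quota as its weighted share of the session quota */
  size_t thread_quota(size_t session_quota, size_t weight, size_t weight_sum)
  {
      return weight_sum ? session_quota * weight / weight_sum : 0;
  }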
This commit also adapts the "cpu_quota" test in base-hw accordingly.
Ref #1464
This patch adds const qualifiers to the functions Allocator::consumed,
Allocator::overhead, Allocator::avail, and Range_allocator::valid_addr.
Fixes #1481
* Instead of using local capabilities within core's context-area
  implementation for stack allocation/attachment, simply do both operations
  when the stack gets attached, thereby getting rid of the local
  capabilities in generic code
* In base-hw, the UTCB of core's main thread gets mapped directly instead
  of constructing a dataspace component out of it and handing over its
  local capability
* Remove the local-capability implementation from all platforms except
  Linux
Ref #1443