The rumpkernel-based tools are intended to be used by executing
'tool/rump'. Since this script covers the most common use cases for
these tools, it is comparatively extensive. Hence, a short tutorial
seems reasonable:
* Format a disk image with Ext2:
To format a disk image with the Ext2 file system, first prepare the
actual image by executing dd:
! dd if=/dev/zero of=/path/to/disk_image bs=1M count=128
Second, use 'tool/rump' to format the disk image:
! rump -f -F ext2fs /path/to/disk_image
Afterwards, the newly created file system may be populated with the
content of another directory by executing
! rump -F ext2fs -p /path/to/another_dir /path/to/disk_image
The content of the file system image can be listed by executing
! rump -F ext2fs -l /path/to/disk_image
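The resulting image can also be inspected on a Linux host by means of
a loopback mount. This is plain Linux tooling, independent of
'tool/rump', and assumes a host with loop-device support:
! sudo mount -o loop -t ext2 /path/to/disk_image /mnt
! ls /mnt
! sudo umount /mnt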
* Create an encrypted disk image:
Creating a cryptographic disk image based on cgd(4) is done by
executing the following command:
! rump -c /path/to/disk_image
This will generate a key that may be used to decrypt the image
later on. Since this command will _only_ generate a key and NOT
initialize the disk image, it is highly advised to prepare the disk
image by using '/dev/urandom' instead of '/dev/zero' (only new blocks
that will be written to the disk image are encrypted). In addition,
while generating the key, a temporary configuration file will be
created. Although this file has proper permissions, it may leak the
generated key if it is created on persistent storage. To specify a
more secure directory, use the '-t' option:
! rump -c -t /path/to/secure/directory /path/to/disk_image
Decrypting the disk image requires the key generated in the previous
step:
! rump -c -k <key> /path/to/disk_image
For now, this key has to be specified as a command-line argument. This
is an issue if the used shell maintains a history of executed
commands.
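A plain shell workaround is to substitute the key from a file via
command substitution, assuming the key was saved to a file when it was
generated (as done in the combined example below). Note that the key
still shows up in the argument list of the running process:
! rump -c -k "$(cat /path/to/keyfile)" /path/to/disk_image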
For completeness' sake, let us put all the examples together by
creating an encrypted Ext2 image that contains all files of Genode's
_demo_ scenario:
! dd if=/dev/urandom of=/tmp/demo.img bs=1M count=16
! $(GENODE_DIR)/tool/rump -c -t /ramfs -F ext2fs /tmp/demo.img > \
! /ramfs/key # key is printed out to stdout
! $(GENODE_DIR)/tool/rump -c -t /ramfs -F ext2fs -k <key> \
! -p $(BUILD_DIR)/var/run/demo /tmp/demo.img
To check if the image was populated successfully, execute the
following:
! $(GENODE_DIR)/tool/rump -c -t /ramfs -F ext2fs -k <key> -l \
! /tmp/demo.img
The rumpkernel tools are used within the Genode OS Framework tool chain
for preparing and populating disk images as well as for creating
cgd(4)-based cryptographic disk devices.
Execute 'tool/tool_chain_rump build' to build the tools and afterwards
'tool/tool_chain_rump install' to install the binaries. The default
install location is _/usr/local/genode-rump_.
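For reference, the two steps executed from the top-level Genode
directory look as follows (the install step may require superuser
permissions, depending on the install location):
! tool/tool_chain_rump build
! tool/tool_chain_rump install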
On ARM, 'string.h' prototypes are used in one way or another. Move the
definitions from rump_fs to the rump library because they are needed
by all rump-based servers running on ARM.
Issue #1141.
Use _italic_ for path names rather than 'verbatim'. Because path names
tend to be quite long, the excessive use of verbatim makes paragraphs
hard to read.
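For example, in the document source, prefer
! the disk image at _/path/to/disk_image_
over
! the disk image at '/path/to/disk_image'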
The new 'select_from_ports' function allows a target description file to
query the path to an installed port. All ports are stored in a central
location specified as CONTRIB_DIR. By default, CONTRIB_DIR is defined
as '<genode-dir>/contrib'. Ports of 3rd-party source code are managed
using the tools at '<genode-dir>/tool/ports/'.
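For illustration, a target description file might use the function as
follows. This is a minimal sketch; the 'libpng' port name, the include
path, and the component names are hypothetical:
! # target.mk of a hypothetical component using a 'libpng' port
! TARGET  = png_example
! SRC_CC  = main.cc
! INC_DIR += $(call select_from_ports,libpng)/include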
Issue #1082
This patch changes the top-level directory layout as a preparatory
step for improving the tools for managing 3rd-party source code.
The rationale is described in the issue referenced below.
Issue #1082
This patch avoids the construction of the Genode::Config object in Noux
processes. The construction of this object would populate the Noux
process with additional capabilities, which cannot be handled by
'fork()'.
The old implementation of sleep_forever() used a local Ipc_server
object, which is not announced (i.e., known) outside of the blocking
process/thread, to wait infinitely for incoming messages. This has led
to problems in the past and present (e.g., issues #538 and #1032).
Fixes #1135.
Fixes #538.
Fixes #1032.
Use the libc Mem_alloc implementation per MMTYP of VirtualBox. With
this, the invariant holds that all memory allocations of a MMTYP are
densely located.
Fixes #1130
Instead of mapping all physical memory 1:1 into core/kernel's address space,
this commit limits the 1:1 mapping to the binary image and the I/O memory
regions used by the kernel only. All subsequent memory accesses of core
are done by mapping the corresponding memory on demand, and not necessarily
1:1.
This commit has several side effects:
The page table code had to be revisited completely. The kernel no longer
inserts anything into the page tables, apart from the initial translations
needed to have the core/kernel image available when enabling the MMU. The
page tables and higher-level translation tables are no longer named Tlb but
Translation_table instead. No indirection class is required to define the
translation tables of a concrete SoC; the appropriate ARM specifier is
sufficient.
The ability to map core's memory the same way as for all other
protection domains makes a special treatment of core's threads (no
context area) obsolete.
Ref #567 (partly solves it)
Fix #723
Fix #1068
Removes the generic processor broadcast function call. Until now, that
call was used for cross-processor TLB maintenance operations only. When
core/kernel gets its memory mapped on demand, and unmapped again, the
previous cross-processor flush routine no longer works because of a
chicken-and-egg problem.
The previous cross-processor broadcast was realized using a thread,
constructed by core, running on top of each processor core. When
constructing a thread in core, a dataspace for its thread context is
constructed. Each constructed RAM dataspace gets attached, zeroed out,
and detached again. The detach routine requires a TLB flush operation
executed on each processor core.
Instead of executing a thread on each processor core, a thread waiting
for a global TLB flush is now removed from the scheduler queue and
attached to a TLB flush queue of each processor. The processor-local
queue gets checked whenever the kernel is entered. The last processor
that executed the TLB flush re-attaches the blocked thread to its
scheduler queue.
To simplify the mechanism described above, a platform thread is now
directly associated with a platform pd object instead of just being
associated with the kernel pd's id.
Ref #723