doc: remove docs covered by the Genode books

Foster the Genode books as a single point of reference for Genode's
documentation. E.g., the Getting-Started section of the "Genode Foundations"
book has long obsoleted doc/getting_started.txt. This patch also removes
long-orphaned texts like gsoc_2012.txt. The approach described in the porting
guide has now been replaced by the Goa SDK. The Genode books can be downloaded
at the genode.org website. Like Genode, they are open source. All text is
licensed as CC-BY-SA and can be found at
https://github.com/nfeske/genode-manual

Fixes #5393
parent 979aaed52b
commit 7928597249
@@ -1,517 +0,0 @@

=======================
The Genode build system
=======================


                               Norman Feske


Abstract
########

The Genode OS Framework comes with a custom build system that is designed for
the creation of highly modular and portable systems software. Understanding
its basic concepts is pivotal for using the full potential of the framework.
This document introduces those concepts and the best practices of putting them
to good use. Besides building software components from source code, common
and repetitive development tasks are the testing of individual components
and the integration of those components into complex system scenarios. To
streamline such tasks, the build system is accompanied by special tooling
support. This document introduces those tools.


Build directories and repositories
##################################

The build system is designed to never touch the source tree. The procedure of
building components and integrating them into system scenarios takes place in
a distinct build directory. One build directory targets a specific platform,
i.e., a kernel and hardware architecture. Because the source tree is decoupled
from the build directory, one source tree can have many associated build
directories, each targeting a different platform.

The recommended way to create a build directory is the 'create_builddir' tool
located at '<genode-dir>/tool/'. When started without arguments, the tool
prints its usage information. For creating a new build directory, one of the
listed target platforms must be specified. Furthermore, the location of the
new build directory has to be specified via the 'BUILD_DIR=' argument. For
example:

! cd <genode-dir>
! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86

This command creates a new build directory for the Linux/x86 platform
at _/tmp/build.linux_x86/_.


Build-directory configuration via 'build.conf'
==============================================

The fresh build directory contains a 'Makefile', which is a symlink to
_tool/builddir/build.mk_. This makefile is the front end of the build system
and is not supposed to be edited. Besides the makefile, there is an _etc/_
subdirectory that contains the build-directory configuration. For most
platforms, there is only a single _build.conf_ file, which defines the parts of
the Genode source tree incorporated in the build process. Those parts are
called _repositories_.

The repository concept allows for keeping the source code well separated by
concern. For example, the platform-specific code for each target platform is
located in a dedicated _base-<platform>_ repository. Also, different
abstraction levels and features of the system reside in different
repositories. The _etc/build.conf_ file defines the set of repositories to
consider in the build process. At build time, the build system overlays the
directory structures of all repositories specified via the 'REPOSITORIES'
declaration to form a single logical source tree. By changing the list of
'REPOSITORIES', the build system's view of the source tree can be altered.

The _etc/build.conf_ file as found in a freshly created build directory lists
the _base-<platform>_ repository of the platform selected at the
'create_builddir' command line as well as the 'base', 'os', and 'demo'
repositories needed for compiling Genode's default demonstration scenario.
Furthermore, there are a number of commented-out lines that can be uncommented
for enabling additional repositories.

Note that the order of the repositories listed in the 'REPOSITORIES'
declaration is important. Front-most repositories shadow subsequent
repositories. This makes the repository mechanism a powerful tool for tweaking
existing repositories: By adding a custom repository in front of another one,
customized versions of single files (e.g., header files or target description
files) can be supplied to the build system without changing the original
repository.
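
To illustrate, the 'REPOSITORIES' declaration of a Linux/x86 build directory
might look like the sketch below. The '<genode-dir>' placeholder and the
custom _my_tweaks_ repository are illustrative only; the generated file in
your build directory may differ in detail:

! REPOSITORIES  = <genode-dir>/base-linux
! REPOSITORIES += <genode-dir>/base
! REPOSITORIES += <genode-dir>/os
! REPOSITORIES += <genode-dir>/demo
!
! # a repository listed in front shadows the subsequent ones
! REPOSITORIES := <genode-dir>/my_tweaks $(REPOSITORIES)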


Building targets
================

To build all targets contained in the list of 'REPOSITORIES' as defined in
_etc/build.conf_, simply issue 'make'. This way, all components that are
compatible with the build directory's base platform will be built. In practice,
however, only some of those components may be of interest. Hence, the build
can be tailored to the components of actual interest by specifying
source-code subtrees. For example, using the following command

! make core server/nitpicker

the build system builds all targets found in the 'core' and 'server/nitpicker'
source directories. You may specify any number of subtrees to the build
system. As indicated by the build output, the build system revisits
each library that is used by each target found in the specified subtrees.
This is very handy for developing libraries because instead of re-building
your library and then your library-using program, you just build your program
and that's it. This concept even works recursively, which means that libraries
may depend on other libraries.

In practice, you won't ever need to build the _whole tree_ but only the
targets that you are interested in.


Cleaning the build directory
============================

To remove all but kernel-related generated files, use

! make clean

To remove all generated files, use

! make cleanall

Both 'clean' and 'cleanall' won't remove any files from the _bin/_
subdirectory. This makes _bin/_ a safe place for files that are
unrelated to the build process, yet required for the integration stage, e.g.,
binary data.


Controlling the verbosity of the build process
==============================================

To understand the inner workings of the build process in more detail, you can
tell the build system to display each directory change by specifying

! make VERBOSE_DIR=

If you are interested in the arguments that are passed to each invocation of
'make', you can make them visible via

! make VERBOSE_MK=

Furthermore, you can observe each single shell-command invocation by specifying

! make VERBOSE=

Of course, you can combine these verbosity toggles for maximizing the noise.
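
For example, to enable all three toggles at once:

! make VERBOSE= VERBOSE_MK= VERBOSE_DIR=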


Enabling parallel builds
========================

To utilize multiple CPU cores during the build process, you may invoke 'make'
with the '-j' argument. If manually specifying this argument becomes an
inconvenience, you may add the following line to your _etc/build.conf_ file:

! MAKE += -j<N>

This way, the build system will always use '<N>' CPUs for building.


Caching inter-library dependencies
==================================

The build system allows you to repeat the last build without performing any
library-dependency checks by using:

! make again

The use of this feature can significantly improve the work flow during
development because, in contrast to source code, library dependencies rarely
change. So the time needed for re-creating inter-library dependencies at each
build can be saved.


Repository directory layout
###########################

Each Genode repository has the following layout:

 Directory  | Description
------------------------------------------------------------
 'doc/'     | Documentation specific to the repository
------------------------------------------------------------
 'etc/'     | Default configuration of the build process
------------------------------------------------------------
 'mk/'      | The build system
------------------------------------------------------------
 'include/' | Globally visible header files
------------------------------------------------------------
 'src/'     | Source code and target build descriptions
------------------------------------------------------------
 'lib/mk/'  | Library build descriptions


Creating targets and libraries
##############################

Target descriptions
===================

A good starting point is to look at the init target. The source code of init is
located at _os/src/init/_. In this directory, you will find a target-description
file named _target.mk_. This file contains the build instructions and is
usually very simple. The build process is controlled by defining the following
variables.


Build variables to be defined by you
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:'TARGET': is the name of the binary to be created. This is the
  only *mandatory variable* to be defined in a _target.mk_ file.

:'REQUIRES': expresses the requirements that must be satisfied in order to
  build the target. You find more details about the underlying mechanism in
  Section [Specializations].

:'LIBS': is the list of libraries that are used by the target.

:'SRC_CC': contains the list of '.cc' source files. The default search location
  for source code is the directory where the _target.mk_ file resides.

:'SRC_C': contains the list of '.c' source files.

:'SRC_S': contains the list of assembly '.s' source files.

:'SRC_BIN': contains binary data files to be linked to the target.

:'INC_DIR': is the list of include search locations. Directories should
  always be appended by using +=. Never use an assignment!

:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
  variable is mostly used for interfacing Genode with legacy software
  components.


Rarely used variables
---------------------

:'CC_OPT': contains additional compiler options to be used for '.c' as
  well as for '.cc' files.

:'CC_CXX_OPT': contains additional compiler options to be used for the
  C++ compiler only.

:'CC_C_OPT': contains additional compiler options to be used for the
  C compiler only.


Specifying search locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~

When specifying search locations for header files via the 'INC_DIR' variable or
for source files via 'vpath', relative pathnames must not be used. Instead,
you can use the following variables to reference locations within the
source-code repository where your target lives:

:'REP_DIR': is the base directory of the current source-code repository.
  Normally, specifying locations relative to the base of the repository is
  never used by _target.mk_ files but needed by library descriptions.

:'PRG_DIR': is the directory where your _target.mk_ file resides. This
  variable is always to be used when specifying a relative path.
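
Putting these variables together, a hypothetical _target.mk_ file might look
as follows. The target name, source files, library, and include path are
purely illustrative:

! TARGET   = my_server
! REQUIRES = linux
! LIBS     = base
! SRC_CC   = main.cc service.cc
! INC_DIR += $(PRG_DIR)/include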


Library descriptions
====================

In contrast to target descriptions, which are scattered across the whole source
tree, library descriptions are located at the central place _lib/mk_. Each
library corresponds to a _<libname>.mk_ file. The base name of the description
file is the name of the library. Therefore, no 'TARGET' variable needs to be
set. The source-code locations are expressed as '$(REP_DIR)'-relative 'vpath'
commands.

Library-description files support the following additional declarations:

:'SHARED_LIB = yes': declares that the library should be built as a shared
  object rather than a static library. The resulting object will be called
  _<libname>.lib.so_.
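
By analogy to the target example above, a hypothetical _lib/mk/my_utils.mk_
description might look like this (names and paths are again placeholders):

! SHARED_LIB = yes
! SRC_CC     = util.cc
! INC_DIR   += $(REP_DIR)/include/my_utils
!
! vpath %.cc $(REP_DIR)/src/lib/my_utils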


Specializations
===============

Building components for different platforms typically involves portions of code
that are tied to certain aspects of the target platform. For example, a target
platform may be characterized by

* A kernel API such as L4v2, Linux, L4.sec,
* A hardware architecture such as x86, ARM, Coldfire,
* A certain hardware facility such as a custom device, or
* Other properties such as software license requirements.

Each of these attributes expresses a specialization of the build process. The
build system provides a generic mechanism to handle such specializations.

The _programmer_ of a software component knows the properties on which the
software relies and thus specifies these requirements in the build-description
file.

The _user/customer/builder_ decides to build software for a specific platform
and defines the platform specifics via the 'SPECS' variable per build
directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
file within the build directory, the build system incorporates the first
_etc/specs.conf_ file found in the repositories as configured for the
build directory. For example, for a 'linux_x86' build directory, the
_base-linux/etc/specs.conf_ file is used by default. The build directory's
'specs.conf' file can still be used to extend the 'SPECS' declarations, for
example to enable special features.

Each '<specname>' in the 'SPECS' variable instructs the build system to

* Include the 'make'-rules of a corresponding _base/mk/spec-<specname>.mk_
  file. This enables the customization of the build process for each platform.

* Search for _<libname>.mk_ files in the _lib/mk/<specname>/_ subdirectory.
  This way, we can provide alternative implementations of one and the same
  library interface for different platforms.

Before a target or library gets built, the build system checks if the 'REQUIRES'
entries of the build-description file are satisfied by entries of the 'SPECS'
variable. The compilation is executed only if each entry in the 'REQUIRES'
variable is present in the 'SPECS' variable as supplied by the build-directory
configuration.
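
For example, a component that strictly requires a Linux host platform could be
declared as sketched below. The spec name 'linux' merely illustrates the
mechanism:

! # in the target.mk file of the component
! REQUIRES = linux
!
! # in the build directory's etc/specs.conf, extending the inherited SPECS
! SPECS += linux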


Building tools to be executed on the host platform
==================================================

Sometimes, software requires custom tools that are used to generate source
code or other ingredients for the build process, for example IDL compilers.
Such tools won't be executed on top of Genode but on the host platform
during the build process. Hence, they must be compiled with the tool chain
installed on the host, not the Genode tool chain.

The Genode build system accommodates the building of such host tools as a side
effect of building a library or a target. Even though it is possible to add
the tool-compilation step to a regular build-description file, it is
recommended to introduce a dedicated pseudo library for building such tools.
This way, the rules for building host tools are kept separate from rules that
refer to Genode programs. By convention, the pseudo library should be named
_<package>_host_tools_ and the host tools should be built at
_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
named _<tool>_, the pseudo library contains a custom make rule like the
following:

! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
! 	$(MSG_BUILD)$(notdir $@)
! 	$(VERBOSE)mkdir -p $(dir $@)
! 	$(VERBOSE)...build commands...

To let the build system trigger the rule, add the custom target to the
'HOST_TOOLS' variable:

! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>

Once the pseudo library for building the host tools is in place, it can be
referenced by each target or library that relies on the respective tools via
the 'LIBS' declaration. The tool can be invoked by referring to
'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.

For an example of using custom host tools, please refer to the mupdf package
found within the libports repository. During the build of the mupdf library,
two custom tools, fontdump and cmapdump, are invoked. The tools are built via
the _lib/mk/mupdf_host_tools.mk_ library-description file. The actual mupdf
library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
in its 'LIBS' declaration and refers to the tools relative to
'$(BUILD_BASE_DIR)'.


Building additional custom targets accompanying a library or program
=====================================================================

There are cases where additional targets must be built besides the standard
files of a library or program. Writing specific make rules for the commands
that generate those files is no problem, but for the rules to be triggered, a
proper dependency must be declared. To achieve this, add the additional
targets to the 'CUSTOM_TARGET_DEPS' variable, as done, for example, in the
iwl_firmware library of the dde_linux repository:

! CUSTOM_TARGET_DEPS += $(addprefix $(BIN_DIR)/,$(IMAGES))


Automated integration and testing
#################################

Genode's cross-kernel portability is one of the prime features of the
framework. However, each kernel takes a different route when it comes to
configuring, integrating, and booting the system. Hence, using a particular
kernel requires profound knowledge about its boot concept and kernel-specific
tools. To streamline the testing of Genode-based systems across the many
different supported kernels, the framework comes equipped with tools that
relieve you from dealing with these peculiarities.


Run scripts
===========

Using so-called run scripts, complete Genode systems can be described in a
concise and kernel-independent way. Once created, a run script can be used
to integrate and test-drive a system scenario directly from the build directory.
The best way to get acquainted with the concept is reviewing the run script
for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
Let's revisit each step expressed in the _hello.run_ script (a minimal sketch
of such a script follows the list):

* Building the components needed for the system using the 'build' command.
  This command instructs the build system to compile the targets listed in
  the brace block. It has the same effect as manually invoking 'make' with
  the specified argument from within the build directory.

* Creating a new boot directory using the 'create_boot_directory' command.
  The integration of the scenario is performed in a dedicated directory at
  _<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
  this directory will contain all components of the final system. In the
  following, we will refer to this directory as run directory.

* Installing the Genode 'config' file into the run directory using the
  'install_config' command. The argument to this command will be written
  to a file called 'config' in the run directory, which is picked up by
  Genode's init process.

* Creating a bootable system image using the 'build_boot_image' command.
  This command copies the specified list of files from the _<build-dir>/bin/_
  directory to the run directory and executes the platform-specific steps
  needed to transform the content of the run directory into a bootable
  form. This form depends on the actual base platform and may be an ISO
  image or a bootable ELF image.

* Executing the system image using the 'run_genode_until' command. Depending
  on the base platform, the system image will be executed using an emulator.
  For most platforms, Qemu is the tool of choice used by default. On Linux,
  the scenario is executed by starting 'core' directly from the run
  directory. The 'run_genode_until' command takes a regular expression
  as argument. If the log output of the scenario matches the specified
  pattern, the 'run_genode_until' command returns. If specifying 'forever'
  as argument (as done in 'hello.run'), this command will never return.
  If a regular expression is specified, an additional argument determines
  a timeout in seconds. If the regular expression does not match before
  the timeout is reached, the run script will abort.
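
For orientation, a minimal run script following these steps might look like
the sketch below. It is modeled on the steps just described; the 'hello'
component and the config content are placeholders rather than a verbatim copy
of _hello.run_:

! build { core init hello }
!
! create_boot_directory
!
! install_config {
! <config>
!   <!-- init configuration of the scenario goes here -->
! </config>
! }
!
! build_boot_image { core init hello }
!
! run_genode_until forever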

Please note that the _hello.run_ script does not contain kernel-specific
information. Therefore it can be executed from the build directory of any base
platform by using:

! make run/hello

When invoking 'make' with an argument of the form 'run/*', the build system
will look in all repositories for a run script with the specified name. The run
script must be located in one of the repositories' 'run/' subdirectories and
have the file extension '.run'.

For a more comprehensive run script, _os/run/demo.run_ serves as a good
example. This run script describes Genode's default demo scenario. As seen in
'demo.run', parts of init's configuration can be made dependent on the
platform's properties expressed as spec values. For example, the PCI driver
gets included in init's configuration only on platforms with a PCI bus. For
appending conditional snippets to the _config_ file, there exists the 'append_if'
command, which takes a condition as its first and the snippet as its second
argument. To test for a SPEC value, the command '[have_spec <spec-value>]' is
used as condition. Analogously to how 'append_if' appends strings, there exists
'lappend_if' to append list items. The latter command is used to conditionally
include binaries in the list of boot modules passed to the 'build_boot_image'
command.
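
As a sketch of this pattern (the component name, RAM quota, and config snippet
are illustrative, not quoted from _demo.run_):

! append_if [have_spec pci] config {
!   <start name="pci_drv">
!     <resource name="RAM" quantum="1M"/>
!   </start>
! }
!
! lappend_if [have_spec pci] boot_modules pci_drv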


The run mechanism explained
===========================

Under the hood, run scripts are executed by an expect interpreter. When the
user invokes a run script via _make run/<run-script>_, the build system invokes
the run tool at _<genode-dir>/tool/run_ with the run script as argument. The
run tool is an expect script that has no other purpose than defining several
commands used by run scripts, including a platform-specific script snippet
called run environment ('env'), and finally including the actual run script.
Whereas _tool/run_ provides the implementations of generic and largely
platform-independent commands, the _env_ snippet included from the platform's
respective _base-<platform>/run/env_ file contains all platform-specific
commands. For reference, the simplest run environment is the one at
_base-linux/run/env_, which implements the 'create_boot_directory',
'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
as base platform. For the other platforms, the run environments are far more
elaborate and document precisely how the integration and boot concept works
on each platform. Hence, the _base-<platform>/run/env_ files are not only
necessary parts of Genode's tooling support but also serve as a resource for
the peculiarities of using each kernel.


Using run scripts to implement test cases
=========================================

Because run scripts are actually expect scripts, the whole arsenal of
language features of the Tcl scripting language is available to them. This
turns run scripts into powerful tools for the automated execution of test
cases. A good example is the run script at _libports/run/lwip.run_, which tests
the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
and validates an HTML page from this server. The run script makes use of a
regular expression as argument to the 'run_genode_until' command to detect the
state when the web server becomes ready, subsequently executes the 'lynx' shell
command to fetch the web site, and employs Tcl's support for regular
expressions to validate the result. The run script works across base platforms
that use Qemu as execution environment.

To get the most out of the run mechanism, a basic understanding of the Tcl
scripting language is required. Furthermore, the functions provided by
_tool/run_ and _base-<platform>/run/env_ should be studied.


Automated testing across base platforms
=======================================

To execute one or multiple test cases on more than one base platform, there
exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
nightly execution of test cases. The tool takes a list of platforms and of
run scripts as arguments and executes each run script on each platform. The
build directory for each platform is created at
_/tmp/autopilot.<username>/<platform>_ and the output of each run script is
written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
prints statistics about whether or not each run script executed
successfully on each platform. If at least one run script failed, autopilot
returns a non-zero exit code, which makes it straightforward to include
autopilot into an automated build-and-test environment.

doc/depot.txt (514 lines)
@@ -1,514 +0,0 @@

============================
Package management on Genode
============================


                               Norman Feske


Motivation and inspiration
##########################

The established system-integration work flow with Genode is based on
the 'run' tool, which automates the building, configuration, integration,
and testing of Genode-based systems. Whereas the run tool succeeds in
overcoming the challenges that come with Genode's diversity of kernels and
supported hardware platforms, its scalability is somewhat limited to
appliance-like system scenarios: The result of the integration process is
a system image with a certain feature set. Whenever requirements change,
the system image is replaced with a newly created image that takes those
requirements into account. In practice, there are two limitations of this
system-integration approach:

First, since the run tool implicitly builds all components required for a
system scenario, the system integrator has to compile all components from
source. E.g., if a system includes a component based on Qt5, one needs to
compile the entire Qt5 application framework, which adds significant
overhead to the actual system-integration tasks of composing and configuring
components.

Second, general-purpose systems tend to become too complex and diverse to be
treated as system images. When looking at commodity OSes, each installation
differs with respect to the installed set of applications, user preferences,
used device drivers, and system preferences. A system based on the run tool's
work flow would require the user to customize the run script of the system for
each tweak. To stay up to date, the user would need to re-create the
system image from time to time while manually maintaining any customizations.
In practice, this is a burden very few end users are willing to endure.

The primary goal of Genode's package management is to overcome these
scalability limitations, in particular:

* Alleviating the need to build everything that goes into system scenarios
  from scratch,
* Facilitating modular system compositions while abstracting from technical
  details,
* On-target system update and system development,
* Assuring the user that system updates are safe to apply by providing the
  ability to easily roll back the system or parts thereof to previous versions,
* Securing the integrity of the deployed software,
* Fostering a federalistic evolution of Genode systems,
* Low friction for existing developers.

The design of Genode's package-management concept is largely influenced by Git
as well as the [https://nixos.org/nix/ - Nix] package manager. In particular,
the latter opened our eyes to the potential that lies beyond the
package management employed in state-of-the-art commodity systems. Even though
we considered adapting Nix for Genode and actually conducted intensive
experiments in this direction (thanks to Emery Hemingway who pushed forward
this line of work), we settled on a custom solution that leverages Genode's
holistic view on all levels of the operating system including the build system
and tooling, source structure, ABI design, framework API, system
configuration, inter-component interaction, and the components themselves.
Whereas Nix is designed to be used on top of Linux, Genode's whole-system view
led us to simplifications that eliminated the need for Nix' powerful features
like its custom description language.


Nomenclature
############

When speaking about "package management", one has to clarify what a "package"
in the context of an operating system represents. Traditionally, a package
is the unit of delivery of a bunch of "dumb" files, usually wrapped up in
a compressed archive. A package may depend on the presence of other
packages. Thereby, a dependency graph is formed. To express how packages fit
together, a package is usually accompanied by meta data
(description). Depending on the package manager, package descriptions follow
certain formalisms (e.g., a package-description language) and express
more-or-less complex concepts such as versioning schemes or the distinction
between hard and soft dependencies.

Genode's package management does not follow this notion of a "package".
Instead of subsuming all deliverable content under one term, we distinguish
different kinds of content, each in a tailored and simple form. To avoid
clashing with the common meaning of a "package", we speak of
"archives" as the basic unit of delivery. The following subsections introduce
the different categories.

Archives are named with their version as suffix, appended via a slash. The
suffix is maintained by the author of the archive. The recommended naming
scheme is the use of the release date as version suffix, e.g.,
'report_rom/2017-05-14'.


Raw-data archives
=================

A raw-data archive contains arbitrary data that is - in contrast to executable
binaries - independent of the processor architecture. Examples are
configuration data, game assets, images, or fonts. The content of raw-data
archives is expected to be consumed by components at runtime. It is not
relevant for the build process of executable binaries. Each raw-data
archive contains merely a collection of data files. There is no meta data.


API archive
===========

An API archive has the structure of a Genode source-code repository. It may
contain all the typical content of such a source-code repository such as header
files (in the _include/_ subdirectory), source codes (in the _src/_
subdirectory), library-description files (in the _lib/mk/_ subdirectory), or
ABI symbols (in the _lib/symbols/_ subdirectory). At the top level, a LICENSE
file is expected that clarifies the license of the contained source code.
There is no meta data contained in an API archive.

An API archive is meant to provide _ingredients_ for building components. The
canonical example is the public programming interface of a library (header
files) and the library's binary interface in the form of an ABI-symbols file.
One API archive may contain the interfaces of multiple libraries. For example,
the interfaces of libc and libm may be contained in a single "libc" API
archive because they are closely related to each other. Conversely, an API
archive may contain a single header file only. The granularity of those
archives may vary. But they have in common that they are used at build time
only, not at runtime.


Source archive
==============

Like an API archive, a source archive has the structure of a Genode
source-tree repository and is expected to contain all the typical content of
such a source repository along with a LICENSE file. But unlike an API archive,
it contains descriptions of actual build targets in the form of Genode's usual
'target.mk' files.

In addition to the source code, a source archive contains a file
called 'used_apis', which contains a list of API-archive names with each
name on a separate line. For example, the 'used_apis' file of the 'report_rom'
source archive looks as follows:

! base/2017-05-14
! os/2017-05-13
! report_session/2017-05-13

The 'used_apis' file declares the APIs to incorporate into the build
process when building the source archive. Hence, they represent
_build-time dependencies_ on the specific API versions.

A source archive may be equipped with a top-level file called 'api' containing
the name of exactly one API archive. If present, it declares that the source
archive _implements_ the specified API. For example, the 'libc/2017-05-14'
source archive contains the actual source code of the libc and libm as well as
an 'api' file with the content 'libc/2017-04-13'. The latter refers to the API
implemented by this version of the libc source package (note the differing
versions of the API and source archives).


Binary archive
==============

A binary archive contains the build result of the equally-named source archive
when built for a particular architecture. That is, all files that would appear
in the _<build-dir>/bin/_ subdirectory when building all targets present in
the source archive. There is no meta data present in a binary archive.

A binary archive is created out of the content of its corresponding source
archive and all API archives listed in the source archive's 'used_apis' file.
Note that since a binary archive depends on only one source archive, which
has no further dependencies, all binary archives can be built independently
of each other.
For example, a libc-using application needs the source code of the
application as well as the libc's API archive (the libc's header files and
ABI) but it does not need the actual libc library to be present.


Package archive
===============

A package archive contains an 'archives' file with a list of archive names
that belong together at runtime. Each listed archive appears on a separate line.
For example, the 'archives' file of the package archive for the window
manager 'wm/2018-02-26' looks as follows:

! genodelabs/raw/wm/2018-02-14
! genodelabs/src/wm/2018-02-26
! genodelabs/src/report_rom/2018-02-26
! genodelabs/src/decorator/2018-02-26
! genodelabs/src/floating_window_layouter/2018-02-26

In contrast to the entries of a source archive's 'used_apis' file, the entries
of the 'archives' file denote the origin of the respective archives
("genodelabs") and the archive type, followed by the versioned name of the
archive.

An 'archives' file may specify raw archives, source archives, or package
archives (as type 'pkg'). It thereby allows the expression of _runtime
dependencies_. If a package archive lists another package archive, it inherits
the content of the listed archive. This way, a new package archive may easily
customize an existing package archive.

A package archive does not specify binary archives directly as they differ
between architectures and are already referenced by the source archives.

In addition to an 'archives' file, a package archive is expected to contain
a 'README' file explaining the purpose of the collection.


Depot structure
###############

Archives are stored within a directory tree called _depot/_. The depot
is structured as follows:

! <user>/pubkey
! <user>/download
! <user>/src/<name>/<version>/
! <user>/api/<name>/<version>/
! <user>/raw/<name>/<version>/
! <user>/pkg/<name>/<version>/
! <user>/bin/<arch>/<src-name>/<src-version>/

The <user> stands for the origin of the contained archives. For example, the
official archives provided by Genode Labs reside in a _genodelabs/_
subdirectory. Within this directory, there is a 'pubkey' file with the
user's public key that is used to verify the integrity of archives downloaded
from the user. The file 'download' specifies the download location as a URL.

Subsuming archives in a subdirectory that corresponds to their origin
(user) serves two purposes. First, it provides a user-local name space for
versioning archives. E.g., there might be two versions of a
'nitpicker/2017-04-15' source archive, one by "genodelabs" and one by
"nfeske". However, since each version resides under its origin's subdirectory,
version-naming conflicts between different origins cannot happen. Second, by
allowing multiple archive origins in the depot side by side, package archives
may incorporate archives of different origins, which fosters the goal of
federalistic development, where contributions of different origins can be
easily combined.

The actual archives are stored in the subdirectories named after the archive
types ('raw', 'api', 'src', 'bin', 'pkg'). Archives contained in the _bin/_
subdirectories are further subdivided by architecture (like
'x86_64' or 'arm_v7').


Depot management
################

The tools for managing the depot content reside under the _tool/depot/_
directory. When invoked without arguments, each tool prints a brief
description of the tool and its arguments.

Unless stated otherwise, the tools are able to consume any number of archives
as arguments. By default, they perform their work sequentially. This can be
changed by the '-j<N>' argument, where <N> denotes the desired level of
parallelization. For example, by specifying '-j4' to the _tool/depot/build_
tool, four concurrent jobs are executed during the creation of binary archives.


Downloading archives
====================

The depot can be populated with archives in two ways, either by creating
the content from locally available source codes as explained by Section
[Automated extraction of archives from the source tree], or by downloading
ready-to-use archives from a web server.

In order to download archives originating from a specific user, the depot's
corresponding user subdirectory must contain two files:

:_pubkey_: contains the public key of the GPG key pair used by the creator
  (aka "user") of the to-be-downloaded archives for signing the archives. The
  file contains the ASCII-armored version of the public key.

:_download_: contains the base URL of the web server from which to fetch
  archives. The web server is expected to mirror the structure of the depot.
  That is, the base URL is followed by a subdirectory for the user,
  which contains the archive-type-specific subdirectories.
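
For instance, the _download_ file consists of a single line holding the base
URL; the host name below is purely a placeholder:

! https://example.org/genode-depot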

If both the public key and the download location are defined, the download
tool can be used as follows:

! ./tool/depot/download genodelabs/src/zlib/2018-01-10

The tool automatically downloads the specified archives and their
dependencies. For example, as the zlib depends on the libc API, the libc API
archive is downloaded as well. All archive types are accepted as arguments
including binary and package archives. Furthermore, it is possible to download
all binary archives referenced by a package archive. For example, the
following command downloads the window-manager (wm) package archive including
all binary archives for the 64-bit x86 architecture. Downloaded binary
archives are always accompanied by their corresponding source and used API
archives.

! ./tool/depot/download genodelabs/pkg/x86_64/wm/2018-02-26

Archive content is not downloaded directly to the depot. Instead, the
individual archives and signature files are downloaded to a quarantine area in
the form of a _public/_ directory located in the root of Genode's source tree.
As its name suggests, the _public/_ directory contains data that is imported
from or to be exported to the public. The download tool populates it with the
downloaded archives in their compressed form accompanied by their
signatures.

The compressed archives are not extracted before their signature is checked
against the public key defined at _depot/<user>/pubkey_. Only if the
signature is valid, the archive content is imported to the target destination
within the depot. This procedure ensures that depot content - whenever
downloaded - is blessed by a cryptographic signature of its creator.


Building binary archives from source archives
=============================================

With the depot populated with source and API archives, one can use the
_tool/depot/build_ tool to produce binary archives. The arguments have the
form '<user>/bin/<arch>/<src-name>' where '<arch>' stands for the targeted
CPU architecture. For example, the following command builds the 'zlib'
library for the 64-bit x86 architecture. It executes four concurrent jobs
during the build process.

! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 -j4

Note that the command expects a specific version of the source archive as
argument. The depot may contain several versions. So the user has to decide
which one to build.

After the tool is finished, the freshly built binary archive can be found in
the depot within the _genodelabs/bin/<arch>/<src>/<version>/_ subdirectory.
Only the final result of the build process is preserved. In the example above,
that would be the _zlib.lib.so_ library.

For debugging purposes, it might be interesting to inspect the intermediate
state of the build. This is possible by adding 'KEEP_BUILD_DIR=1' as argument
to the build command. The binary's intermediate build directory can be
found beside the binary archive's location, named with a '.build' suffix.

By default, the build tool won't attempt to rebuild a binary archive that is
already present in the depot. However, it is possible to force a rebuild via
the 'REBUILD=1' argument.


Publishing archives
===================

Archives located in the depot can be conveniently made available to the public
using the _tool/depot/publish_ tool. Given an archive path, the tool takes
care of determining all archives that are implicitly needed by the specified
one, wrapping the archives' content into compressed tar archives, and signing
those.

As a precondition, the tool requires you to possess the private key that
matches the _depot/<you>/pubkey_ file within your depot. The key pair should
be present in the key ring of your GNU privacy guard.

To publish archives, one needs to specify the specific version to publish.
For example:

! ./tool/depot/publish <you>/pkg/x86_64/wm/2018-02-26

The command checks that the specified archive and all dependencies are present
in the depot. It then proceeds with the archiving and signing operations. For
the latter, the pass phrase for your private key will be requested. The
publish tool prints information about the processed archives, e.g.:

! publish /.../public/<you>/api/base/2018-02-26.tar.xz
! publish /.../public/<you>/api/framebuffer_session/2017-05-31.tar.xz
! publish /.../public/<you>/api/gems/2018-01-28.tar.xz
! publish /.../public/<you>/api/input_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/api/nitpicker_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/os/2018-02-13.tar.xz
! publish /.../public/<you>/api/report_session/2018-01-05.tar.xz
! publish /.../public/<you>/api/scout_gfx/2018-01-05.tar.xz
! publish /.../public/<you>/bin/x86_64/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/bin/x86_64/wm/2018-02-26.tar.xz
! publish /.../public/<you>/pkg/wm/2018-02-26.tar.xz
! publish /.../public/<you>/raw/wm/2018-02-14.tar.xz
! publish /.../public/<you>/src/decorator/2018-02-26.tar.xz
! publish /.../public/<you>/src/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<you>/src/report_rom/2018-02-26.tar.xz
! publish /.../public/<you>/src/wm/2018-02-26.tar.xz

According to the output, the tool populates a directory called _public/_
at the root of the Genode source tree with the to-be-published archives.
The content of the _public/_ directory is now ready to be copied to a
web server, e.g., by using rsync.


Automated extraction of archives from the source tree
#####################################################

Genode users are expected to populate their local depot with content obtained
via the _tool/depot/download_ tool. However, Genode developers need a way to
create depot archives locally in order to make them available to users. Thanks
to the _tool/depot/extract_ tool, the assembly of archives does not need to be
a manual process. Instead, archives can be conveniently generated out of the
source codes present in the Genode source tree and the _contrib/_ directory.

However, the granularity of splitting source code into archives, the
definition of what a particular API entails, and the relationship between
archives must be augmented by the archive creator as this kind of information
is not present in the source tree as is. This is where so-called "archive
recipes" enter the picture. An archive recipe defines the content of an
archive. Such recipes can be located in a _recipes/_ subdirectory of any
source-code repository, similar to how port descriptions and run scripts
are organized. Each _recipes/_ directory contains subdirectories for the
archive types, which, in turn, contain a directory for each archive. The
latter is called a _recipe directory_.


Recipe directory
----------------

The recipe directory is named after the archive _omitting the archive version_
and contains at least one file named _hash_. This file defines the version
of the archive along with a hash value of the archive's content,
separated by a space character. By tying the version name to a particular hash
value, the _extract_ tool is able to detect the appropriate points in time
whenever the version should be increased due to a change of the archive's
content.
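
A recipe's _hash_ file hence consists of a single line; the version and hash
value below are made-up placeholders:

! 2018-01-10 4ac2f3101a5c0f951194fa9dae0ad7b2aee9e421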


API, source, and raw-data archive recipes
-----------------------------------------

Recipe directories for API, source, or raw-data archives contain a
_content.mk_ file that defines the archive content in the form of make rules.
The content.mk file is executed from the archive's location within the depot.
Hence, the contained rules can refer to archive-relative files as targets.
The first (default) rule of the content.mk file is executed with a customized
make environment:

:GENODE_DIR: A variable that holds the path to the root of the Genode source
  tree,

:REP_DIR: A variable with the path to the source-code repository where the
  recipe is located,

:port_dir: A make function that returns the directory of a port within the
  _contrib/_ directory. The function expects the location of the
  corresponding port file as argument. For example, the 'zlib' recipe
  residing in the _libports/_ repository may specify '$(REP_DIR)/ports/zlib'
  to access the 3rd-party zlib source code.
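
As an illustration, a content.mk file for a hypothetical archive might contain
rules along these lines (the _my_lib_ names and the port path are placeholders,
not taken from an actual recipe):

! content: include LICENSE
!
! include:
! 	mkdir -p $@
! 	cp -r $(REP_DIR)/include/my_lib $@/
!
! LICENSE:
! 	cp $(call port_dir,$(REP_DIR)/ports/my_lib)/LICENSE $@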
|
|
||||||
|
|
||||||
Source archive recipes contain simplified versions of the 'used_apis' and
|
|
||||||
(for libraries) 'api' files as found in the archives. In contrast to the
|
|
||||||
depot's counterparts of these files, which contain version-suffixed names,
|
|
||||||
the files contained in recipe directories omit the version suffix. This
|
|
||||||
is possible because the extract tool always extracts the _current_ version
|
|
||||||
of a given archive from the source tree. This current version is already
|
|
||||||
defined in the corresponding recipe directory.
|
|
||||||
|
|
||||||
Package-archive recipes
-----------------------

The recipe directory for a package archive contains the verbatim content of
the to-be-created package archive except for the _archives_ file. All other
files are copied verbatim to the archive. The content of the recipe's
_archives_ file may omit the version information from the listed ingredients.
Furthermore, the user part of each entry can be left blank by using '_' as a
wildcard. When generating the package archive from the recipe, the extract
tool will replace this wildcard with the user that creates the archive.
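
As an illustration, the _archives_ file of a hypothetical package recipe
might list its ingredients as follows, using '_' to leave the depot user open
and omitting the versions:

! _/src/init
! _/src/report_rom
! _/raw/drivers_interactive-pc
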
Convenience front-end to the extract, build tools
#################################################

For developers, the work flow of interacting with the depot is most often the
combination of the _extract_ and _build_ tools, where the latter expects
concrete version names as arguments. The _create_ tool accelerates this
common usage pattern by allowing the user to omit the version names.
Operations implicitly refer to the _current_ version of the archives as
defined in the recipes.
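
For example, a single invocation like the following (with '<user>' and
'<name>' as placeholders) creates the current version of a source archive as
defined by its recipe:

! ./tool/depot/create <user>/src/<name>
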
Furthermore, the _create_ tool is able to manage version updates for the
developer. If invoked with the argument 'UPDATE_VERSIONS=1', it automatically
updates hash files of the involved recipes by taking the current date as
version name. This is a valuable assistance in situations where a commonly
used API changes. In this case, the versions of the API and all dependent
archives must be increased, which would be a labour-intensive task otherwise.

If the depot already contains an archive of the current version, the create
tool won't re-create the depot archive by default. Local modifications of the
source code in the repository do not automatically result in a new archive.
To ensure that the depot archive is current, one can specify 'FORCE=1' to the
create tool. With this argument, existing depot archives are replaced by
freshly extracted ones and version updates are detected. When specified for
creating binary archives, 'FORCE=1' normally implies 'REBUILD=1'. To prevent
the superfluous rebuild of binary archives whose source versions remain
unchanged, 'FORCE=1' can be combined with the argument 'REBUILD='.
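
Putting these arguments together, a developer who changed a commonly used API
might refresh the affected archives with a command along the following lines.
The archive path is a placeholder; binary archives carry an additional
build-spec part such as 'x86_64':

! ./tool/depot/create <user>/bin/x86_64/<name> UPDATE_VERSIONS=1 FORCE=1 REBUILD=
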

Accessing depot content from run scripts
########################################

The depot tools are not meant to replace the run tool but rather to
complement it. When both tools are combined, the run tool implicitly refers
to "current" archive versions as defined for the archive's corresponding
recipes. This way, the regular run-tool work flow can be maintained while
attaining a productivity boost by fetching content from the depot instead of
building it.

Run scripts can use the 'import_from_depot' function to incorporate archive
content from the depot into a scenario. The function must be called after the
'create_boot_directory' function and takes any number of pkg, src, or raw
archives as arguments. An archive is specified as a depot-relative path of
the form '<user>/<type>/<name>'. Run scripts may call 'import_from_depot'
repeatedly. Each argument can refer to a specific version of an archive or
just the version-less archive name. In the latter case, the current version
(as defined by a corresponding archive recipe in the source tree) is used.

If a 'src' archive is specified, the run tool integrates the content of the
corresponding binary archive into the scenario. The binary archives are
selected according to the spec values as defined for the build directory.
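
A minimal sketch of such a run-script fragment is shown below. The depot user
'genodelabs' and the pkg and raw archive names are placeholders for whatever
the scenario actually needs:

! create_boot_directory
!
! import_from_depot genodelabs/src/init \
!                   genodelabs/pkg/<pkg-name> \
!                   genodelabs/raw/<raw-name>
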
@ -1,154 +0,0 @@

=============================
How to start exploring Genode
=============================


Norman Feske


Abstract
########

This guide is meant to provide you with a painless start to using the Genode
OS Framework. It explains the steps needed to get a simple demo system
running on Linux first, followed by instructions on how to run the same
scenario on a microkernel.

Quick start to build Genode for Linux
#####################################

The best starting point for exploring Genode is to run it on Linux. Make sure
that your system satisfies the following requirements:

* GNU Make version 3.81 or newer
* 'libsdl2-dev', 'libdrm-dev', and 'libgbm-dev' (needed to run interactive
  system scenarios directly on Linux)
* 'tclsh' and 'expect'
* 'byacc' (only needed for the L4/Fiasco kernel)
* 'qemu' and 'xorriso' (for testing non-Linux platforms via Qemu)

For using the entire collection of ported 3rd-party software, the following
packages should be installed additionally: 'autoconf2.64', 'autogen', 'bison',
'flex', 'g++', 'git', 'gperf', 'libxml2-utils', 'subversion', and 'xsltproc'.

Your exploration of Genode starts with obtaining the source code of the
[https://sourceforge.net/projects/genode/files/latest/download - latest version]
of the framework. For detailed instructions and alternatives to the download
from Sourceforge please refer to [https://genode.org/download]. Furthermore,
you will need to install the official Genode tool chain, which you can
download at [https://genode.org/download/tool-chain].

The Genode build system never touches the source tree but generates object
files, libraries, and programs in a dedicated build directory. We do not have
a build directory yet. For a quick start, let us create one for the Linux
base platform:

! cd <genode-dir>
! ./tool/create_builddir x86_64

This creates a new build directory for building x86_64 binaries at
'./build/x86_64'. The build system creates unified binaries that work on the
given architecture independently of the underlying base platform, in this
case Linux.

Now change into the fresh build directory:

! cd build/x86_64

Please uncomment the following line in 'etc/build.conf' to make the build
process as smooth as possible:

! RUN_OPT += --depot-auto-update

To give Genode a try, build and execute a simple demo scenario via:

! make KERNEL=linux BOARD=linux run/demo

By invoking 'make' with the 'run/demo' argument, all components needed by the
demo scenario are built and the demo is executed. This includes all components
which are implicitly needed by the base platform. The base platform that the
components will be executed upon is selected via the 'KERNEL' and 'BOARD'
variables. If you are interested in looking behind the scenes of the demo
scenario, please refer to 'doc/build_system.txt' and the run script at
'os/run/demo.run'.


Using platforms other than Linux
================================

Running Genode on Linux is the most convenient way to get acquainted with the
framework. However, the point where Genode starts to shine is when used as the
user land executed on a microkernel. The framework supports a variety of
different kernels such as L4/Fiasco, L4ka::Pistachio, OKL4, and NOVA. Those
kernels largely differ in terms of feature sets, build systems, tools, and
boot concepts. To relieve you from dealing with those peculiarities, Genode
provides you with a unified way of using them. For each kernel platform, there
exists a dedicated description file that enables the 'prepare_port' tool to
fetch and prepare the designated 3rd-party sources. Just issue the following
command within the toplevel directory of the Genode source tree:

! ./tool/ports/prepare_port <platform>

Note that each 'base-<platform>' directory comes with a 'README' file, which
you should revisit first when exploring the base platform. Additionally, most
'base-<platform>' directories provide more in-depth information within their
respective 'doc/' subdirectories.

For the VESA driver on x86, the x86emu library is required and can be
downloaded and prepared by again invoking the 3rd-party sources preparation
tool:

! ./tool/ports/prepare_port x86emu

On x86 base platforms the GRUB2 boot loader is required and can be downloaded
and prepared by invoking:

! ./tool/ports/prepare_port grub2

Now that the base platform is prepared, the 'create_builddir' tool can be used
to create a build directory for your architecture of choice by giving the
architecture as argument. To see the list of available architectures, execute
'create_builddir' with no arguments. Note that not all kernels support all
architectures.

For example, to give the demo scenario a spin on the OKL4 kernel, the
following steps are required:

# Download the kernel:
! cd <genode-dir>
! ./tool/ports/prepare_port okl4
# Create a build directory
! ./tool/create_builddir x86_32
# Uncomment the following line in 'build/x86_32/etc/build.conf'
! REPOSITORIES += $(GENODE_DIR)/repos/libports
# Build and execute the demo using Qemu
! make -C build/x86_32 KERNEL=okl4 BOARD=pc run/demo

The procedure works analogously for the other base platforms. You can,
however, reuse the already created build directory and skip its creation step
if the architecture matches.


How to proceed with exploring Genode
####################################

Now that you have taken the first steps into using Genode, you may seek to
get more in-depth knowledge and practical experience. The foundation for
doing so is a basic understanding of the build system. The documentation at
'build_system.txt' provides you with the information about the layout of the
source tree, how new components are integrated, and how complete system
scenarios can be expressed. Equipped with this knowledge, it is time to get
hands-on experience with creating custom Genode components. A good start is
the 'hello_tutorial', which shows you how to implement a simple client-server
scenario. To compose complex scenarios out of many small components, the
documentation of Genode's configuration concept at 'os/doc/init.txt' is an
essential reference.

Certainly, you will have further questions on your way with exploring Genode.
The best place to get these questions answered is the Genode mailing list.
Please feel welcome to ask your questions and to join the discussions:

:Genode Mailing Lists:

  [https://genode.org/community/mailing-lists]

@ -1,236 +0,0 @@

==========================
Google Summer of Code 2012
==========================

Genode Labs has applied as mentoring organization for the Google Summer of
Code program in 2012. This document summarizes all information important to
Genode's participation in the program.

:[http://www.google-melange.com/gsoc/homepage/google/gsoc2012]:
  Visit the official homepage of the Google Summer of Code program.

*Update* Genode Labs was not accepted as mentoring organization for GSoC 2012.


Application of Genode Labs as mentoring organization
####################################################

:Organization ID: genodelabs

:Organization name: Genode Labs

:Organization description:

  Genode Labs is a self-funded company founded by the original creators of
  the Genode OS project. Its primary mission is to bring the Genode
  operating-system technology, which started off as an academic research
  project, to the real world. At present, Genode Labs is the driving force
  behind the Genode OS project.

:Organization home page url:

  http://www.genode-labs.com

:Main organization license:

  GNU General Public License version 2

:Admins:

  nfeske, chelmuth

:What is the URL for your Ideas page?:

  [http://genode.org/community/gsoc_2012]

:What is the main IRC channel for your organization?:

  #genode

:What is the main development mailing list for your organization?:

  genode-main@lists.sourceforge.net

:Why is your organization applying to participate? What do you hope to gain?:

  During the past three months, our project underwent the transition from a
  formerly company-internal development to a completely open and transparent
  endeavour. By inviting a broad community for participation in shaping the
  project, we hope to advance Genode to become a broadly used and recognised
  technology. GSoC would help us to build our community.

  The project has its roots at the University of Technology Dresden where the
  Genode founders were former members of the academic research staff. We have
  a long and successful track record with regard to supervising students.
  GSoC would provide us with the opportunity to establish and cultivate
  relationships to new students and to spawn excitement about Genode OS
  technology.

:Does your organization have an application template?:

  GSoC student projects follow the same procedure as regular community
  contributions, in particular the student is expected to sign the Genode
  Contributor's Agreement. (see [http://genode.org/community/contributions])

:What criteria did you use to select your mentors?:

  We selected the mentors on the basis of their long-time involvement with
  the project and their time-tested communication skills. For each proposed
  working topic, there is at least one stakeholder with profound technical
  background within Genode Labs. This person will be the primary contact
  person for the student working on the topic. However, we will encourage
  the student to make his/her development transparent to all community
  members (i.e., via GitHub). So any community member interested in the topic
  is able to bring in his/her ideas at any stage of development.
  Consequently, in practice, there will be multiple persons mentoring each
  student.

:What is your plan for dealing with disappearing students?:

  We will actively contact them using all channels of communication available
  to us, find out the reason for the disappearance, and try to resolve the
  problems (if they are related to GSoC or our project for that matter).

:What is your plan for dealing with disappearing mentors?:

  All designated mentors are local to Genode Labs. So the chance for them to
  disappear is very low. However, if a mentor disappears for any serious
  reason (e.g., serious illness), our organization will provide a back-up
  mentor.

:What steps will you take to encourage students to interact with your community?:

  First, we discussed GSoC on our mailing list where we received an
  overwhelmingly positive response. We checked back with other Open-Source
  projects related to our topics, exchanged ideas, and tried to find
  synergies between our respective projects. For most project ideas, we have
  created issues in our issue tracker to collect technical information and
  discuss the topic. For several topics, we have already observed interest
  from students in participating.

  During the work on the topics, the mentors will try to encourage the
  students to play an active role in discussions on our mailing list, also on
  topics that are not strictly related to the student project. We regard
  active participation as key to enabling new community members to develop a
  holistic view onto our project and gather a profound understanding of our
  methodologies.

  Student projects will be carried out in a transparent fashion at GitHub.
  This makes it easy for each community member to get involved, discuss the
  rationale behind design decisions, and audit solutions.


Topics
######

While discussing GSoC participation on our mailing list, we identified the
following topics as being well suited for GSoC projects. However, if none of
those topics receives resonance from students, there is a more comprehensive
list of topics available at our road map and our collection of future
challenges:

:[http://genode.org/about/road-map]: Road-map
:[http://genode.org/about/challenges]: Challenges


Combining Genode with the HelenOS/SPARTAN kernel
================================================

[http://www.helenos.org - HelenOS] is a microkernel-based multi-server OS
developed at the university of Prague. It is based on the SPARTAN microkernel,
which runs on a wide variety of CPU architectures including Sparc, MIPS, and
PowerPC. This broad platform support makes SPARTAN an interesting kernel to
look at alone. But a further motivation is the fact that SPARTAN does not
follow the classical L4 road, providing a kernel API that comes with its own
terminology and different kernel primitives. This makes the mapping of
SPARTAN's kernel API to Genode a challenging endeavour and would provide us
with feedback regarding the universality of Genode's internal interfaces.
Finally, this project has the potential to ignite a further collaboration
between the HelenOS and Genode communities.


Block-level encryption
======================

Protecting privacy is one of the strongest motivational factors for developing
Genode. One pivotal element in that respect is the persistence of information
via block-level encryption. For example, to use Genode every day at Genode
Labs, it's crucial to protect the confidentiality of some information that's
not part of the Genode code base, e.g., emails and reports. There are several
expansion stages imaginable to reach the goal, and the basic building blocks
(block-device interface, ATA/SATA driver for Qemu) are already in place.

:[https://github.com/genodelabs/genode/issues/55 - Discuss the issue...]:


Virtual NAT
===========

For sharing one physical network interface among multiple applications, Genode
comes with a component called nic_bridge, which implements proxy ARP. Through
this component, each application receives a distinct (virtual) network
interface that is visible to the real network. I.e., each application requests
an IP address via a DHCP request at the local network. An alternative approach
would be a component that implements NAT on Genode's NIC session interface.
This way, the whole Genode system would use only one IP address visible to the
local network. (by stacking multiple nat and nic_bridge components together,
we could even form complex virtual networks inside a single Genode system)

The implementation of the virtual NAT could follow the lines of the existing
nic_bridge component. For parsing network packets, there are already some
handy utilities available (at os/include/net/).

:[https://github.com/genodelabs/genode/issues/114 - Discuss the issue...]:


Runtime for the Go or D programming language
============================================

Genode is implemented in C++. However, we are repeatedly receiving requests
for offering safer alternatives for implementing OS-level functionality such
as device drivers, file systems, and other protocol stacks. The goals for
this project are to investigate the Go and D programming languages with
respect to their use within Genode, port the runtime of those languages to
Genode, and provide a useful level of integration with Genode.


Block cache
===========

Currently, there exists only the iso9660 server that is able to cache block
accesses. A generic solution for caching block-device accesses would be nice.
One suggestion is a component that requests a block session (routed to a block
device driver) as back end and also announces a block service (front end)
itself. Such a block-cache server waits for requests at the front end and
forwards them to the back end. But it uses its own memory to cache blocks.

The first version could support only read-only block devices (such as CDROM)
by caching the results of read accesses. In this version, we already need an
eviction strategy that kicks in once the block cache gets saturated. For a
start, this could be FIFO or LRU (least recently used).

A more sophisticated version would support write accesses, too. Here we need a
way to sync blocks to the back end at regular intervals in order to guarantee
that all block-write accesses are becoming persistent after a certain time. We
would also need a way to explicitly flush the block cache (i.e., when the
front-end block session gets closed).

:[https://github.com/genodelabs/genode/issues/113 - Discuss the issue...]:


; _Since Genode Labs was not accepted as GSoC mentoring organization, the_
; _following section has become irrelevant. Hence, it is commented-out_
;
; Student applications
; ####################
;
; The formal steps for applying to the GSoC program will be posted once Genode
; Labs is accepted as mentoring organization. If you are a student interested
; in working on a Genode-related GSoC project, now is a good time to get
; involved with the Genode community. The best way is joining the discussions
; at our mailing list and the issue tracker. This way, you will learn about
; the currently relevant topics, our discussion culture, and the people behind
; the project.
;
; :[http://genode.org/community/mailing-lists]: Join our mailing list
; :[https://github.com/genodelabs/genode/issues]: Discuss issues around Genode

@ -1,314 +0,0 @@

========================================
Configuring the init component of Genode
========================================


Norman Feske

The Genode architecture facilitates the flexible construction of complex usage
scenarios out of Genode's components used as generic building blocks. Thanks
to the strictly hierarchic and, at the same time, recursive structure of
Genode, a parent has full control over the way its children interact with
each other and with the parent. The init component plays a special role in
that picture. At boot time, it gets started by core, gets assigned all
physical resources, and controls the execution of all further components,
which can be further instances of init. Init's policy is driven by a
configuration file, which declares a number of children, their relationships,
and resource assignments. This document describes the configuration mechanism
used to steer the policy of the init component. The configuration is
described in a single XML file called 'config' supplied via core's ROM
service.


Configuration
#############

At the parent-child interface, there are two operations that are subject to
policy decisions of the parent: the child announcing a service and the child
requesting a service. If a child announces a service, it is up to the parent
to decide if and how to make this service accessible to its other children.
When a child requests a service, the parent may deny the session request,
delegate the request to its own parent, implement the requested service
locally, or open a session at one of its other children. This decision may
depend on the requested service or the session-construction arguments
provided by the child. Apart from assigning resources to children, the
central element of the policy implemented in the parent is a set of rules to
route session requests. Therefore, init's configuration concept is laid out
around components and the routing of session requests. The concept is best
illustrated by an example (the following config file can be used on Linux):

! <config>
!   <parent-provides>
!     <service name="LOG"/>
!   </parent-provides>
!   <start name="timer">
!     <resource name="RAM" quantum="1M"/>
!     <provides> <service name="Timer"/> </provides>
!   </start>
!   <start name="test-timer">
!     <resource name="RAM" quantum="1M"/>
!     <route>
!       <service name="Timer"> <child name="timer"/> </service>
!       <service name="LOG"> <parent/> </service>
!     </route>
!   </start>
! </config>

First, there is the declaration of services provided by the parent of the
configured init instance. In this case, we declare that the parent provides a
LOG service. For each child to start, there is a '<start>' node describing
resource assignments, declaring services provided by the child, and holding a
routing table for session requests originating from the child. The first
child is called "timer" and implements the "Timer" service. The second
component called "test-timer" is a client of the timer service. In its
routing table, we see that requests for "Timer" sessions should be routed to
the "timer" child whereas requests for "LOG" sessions should be delegated to
init's parent. Per-child service routing rules provide a flexible way to
express arbitrary client-server relationships. For example, service requests
may be transparently mediated through special policy components acting upon
session-construction arguments. There might be multiple children implementing
the same service, each addressed by different routing tables. If there is no
valid route to a requested service, the service is denied. In the example
above, the routing tables act effectively as a whitelist of services the
child is allowed to use.

In practice, usage scenarios become more complex than the basic example,
increasing the size of routing tables. Furthermore, in many practical cases,
multiple children may use the same set of services, and require duplicated
routing tables within the configuration. In particular during development,
the elaborative specification of routing tables tends to become an
inconvenience. To alleviate this problem, there are two mechanisms, wildcards
and a default route. Instead of specifying a list of single service routes
targeting the same destination, the wildcard '<any-service>' becomes handy.
For example, instead of specifying
! <route>
!   <service name="ROM"> <parent/> </service>
!   <service name="RM"> <parent/> </service>
!   <service name="PD"> <parent/> </service>
!   <service name="CPU"> <parent/> </service>
! </route>
the following shortcut can be used:
! <route>
!   <any-service> <parent/> </any-service>
! </route>
The latter version is not as strict as the first one because it permits the
child to create sessions at the parent, which were not whitelisted in the
elaborative version. Therefore, the use of wildcards is discouraged for
configuring untrusted components. Wildcards and explicit routes may be
combined as illustrated by the following example:
! <route>
!   <service name="LOG"> <child name="nitlog"/> </service>
!   <any-service> <parent/> </any-service>
! </route>
The routing table is processed starting with the first entry. If the route
matches the service request, it is taken, otherwise the remaining
routing-table entries are visited. This way, the explicit service route of
"LOG" sessions to "nitlog" shadows the LOG service provided by the parent.

To emulate the traditional init policy, which allowed a child to use services
provided by arbitrary other children, there is a further wildcard called
'<any-child>'. Using this wildcard, such a policy can be expressed as follows:
! <route>
!   <any-service> <parent/> </any-service>
!   <any-service> <any-child/> </any-service>
! </route>
This rule would delegate all session requests referring to one of the parent's
services to the parent. If no parent service matches the session request, the
request is routed to any child providing the service. The rule can be further
reduced to:
! <route>
!   <any-service> <parent/> <any-child/> </any-service>
! </route>
Potential ambiguities caused by multiple children providing the same service
are detected automatically. In this case, the ambiguity must be resolved using
an explicit route preceding the wildcards.

To reduce the need to specify the same routing table for many children in one
configuration, there is a '<default-route>' mechanism. The default route is
declared within the '<config>' node and used for each '<start>' entry with no
'<route>' node. In particular during development, the default route becomes
handy to keep the configuration tidy and neat.
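
A minimal sketch of such a default route, assuming the children may fall back
to the parent and to sibling services, could look like this:

! <config>
!   <default-route>
!     <any-service> <parent/> <any-child/> </any-service>
!   </default-route>
!   ...
! </config>
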
The combination of explicit routes and wildcards is designed to scale well
from being convenient to use during development towards being highly secure
at deployment time. If only explicit rules are present in the configuration,
the permitted relationships between all components are explicitly defined and
can be easily verified. Note however that the degree to which those rules are
enforced at the kernel-interface level depends on the used base platform.


Advanced features
#################

In addition to the service routing facility described in the previous section,
the following features are worth noting:


Resource quota saturation
=========================

If a specified resource quantum (e.g., RAM quota) exceeds the available
resources, the available resources are assigned completely to the child. This
makes it possible to assign all remaining resources to the last child by
simply specifying an overly large quantum.
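
For example, the last child of a configuration might be granted all remaining
RAM by requesting a quantum that is larger than what can possibly be left
over (the child name is just an example):

! <start name="shell">
!   <resource name="RAM" quantum="1G"/>
!   ...
! </start>
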

Multiple instantiation of a single ELF binary
=============================================

Each '<start>' node requires a unique 'name' attribute. By default, the value
of this attribute is used as file name for obtaining the ELF binary at the
parent's ROM service. If multiple instances of the same ELF binary are
needed, the binary name can be explicitly specified using a '<binary>' sub
node of the '<start>' node:

! <binary name="filename"/>

This way, the unique child names can be chosen independently of the binary
file name.


Nested configuration
====================

Each '<start>' node can host a '<config>' sub node. The content of this sub
node is provided to the child when a ROM session for the file name "config"
is requested. Thereby, arbitrary configuration parameters can be passed to
the child. For example, the following configuration starts 'timer-test'
within an init instance within another init instance. To show the flexibility
of init's service routing facility, the "Timer" session of the second-level
'timer-test' child is routed to the timer service started at the first-level
init instance.

! <config>
!   <parent-provides>
!     <service name="LOG"/>
!     <service name="ROM"/>
!     <service name="CPU"/>
!     <service name="RM"/>
!     <service name="PD"/>
!   </parent-provides>
!   <start name="timer">
!     <resource name="RAM" quantum="1M"/>
!     <provides><service name="Timer"/></provides>
!   </start>
!   <start name="init">
!     <resource name="RAM" quantum="1M"/>
!     <config>
!       <parent-provides>
!         <service name="Timer"/>
!         <service name="LOG"/>
!       </parent-provides>
!       <start name="test-timer">
!         <resource name="RAM" quantum="1M"/>
!         <route>
!           <service name="Timer"> <parent/> </service>
!           <service name="LOG"> <parent/> </service>
!         </route>
!       </start>
!     </config>
!     <route>
!       <service name="Timer"> <child name="timer"/> </service>
!       <service name="LOG"> <parent/> </service>
!       <service name="ROM"> <parent/> </service>
!       <service name="CPU"> <parent/> </service>
!       <service name="RM"> <parent/> </service>
!       <service name="PD"> <parent/> </service>
!     </route>
!   </start>
! </config>

The services ROM, CPU, RM, and PD are required by the second-level init
instance to create the timer-test component.

As illustrated by this example, the use of the nested configuration feature
enables the construction of arbitrarily complex component trees via a single
configuration file.


Assigning subsystems to CPUs
============================

The assignment of subsystems to CPU nodes consists of two parts, the
definition of the affinity-space dimensions as used for the init component,
and the association of subsystems with affinity locations (relative to the
affinity space). The affinity space is configured as a sub node of the config
node. For example, the following declaration describes an affinity space of
4x2:

! <config>
!   ...
!   <affinity-space width="4" height="2" />
!   ...
! </config>

Subsystems can be constrained to parts of the affinity space using the
'<affinity>' sub node of a '<start>' entry:

! <config>
!   ...
!   <start name="loader">
!     <affinity xpos="0" ypos="1" width="2" height="1" />
!     ...
!   </start>
!   ...
! </config>


Priority support
================

The number of CPU priorities to be distinguished by init can be specified with
the 'prio_levels' attribute of the '<config>' node. The value must be a power
of two. By default, no priorities are used. To assign a priority to a child
component, a priority value can be specified as 'priority' attribute of the
corresponding '<start>' node. Valid priority values lie in the range of
-prio_levels + 1 (maximum priority degradation) to 0 (no priority degradation).
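
As a brief illustration, a configuration that distinguishes four priority
levels and runs a hypothetical 'background_job' child at a degraded priority
might contain the following fragments:

! <config prio_levels="4">
!   ...
!   <start name="background_job" priority="-1">
!     ...
!   </start>
!   ...
! </config>
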
Verbosity
=========

To ease the debugging, init can be directed to print various status
information as LOG output. To enable the verbose mode, assign the value "yes"
to the 'verbose' attribute of the '<config>' node.
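
For example, verbose output can be switched on directly at the top-level
config node:

! <config verbose="yes">
!   ...
! </config>
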

Propagation of exit events
==========================

A component can notify its parent about its graceful exit via the exit RPC
function of the parent interface. By default, init responds to such a
notification from one of its children by merely printing a log message but
ignores it otherwise. However, there are scenarios where the exit of a
particular child should result in the exit of the entire init component. To
propagate the exit of a child to the parent of init, start nodes can host the
optional sub node '<exit>' with the attribute 'propagate' set to "yes".

! <config>
!   <start name="noux">
!     <exit propagate="yes"/>
!     ...
!   </start>
! </config>

The exit value specified by the exiting child is forwarded to init's parent.


Using the configuration concept
###############################

To get acquainted with the configuration format, there are two example
configuration files located at 'os/src/init/', which are both ready-to-use
with the Linux version of Genode. Both configurations produce the same
scenario but they differ in the way policy is expressed. The
'explicit_routing' configuration is an example for the elaborative
specification of all service routes. All service requests not explicitly
specified are denied. So this policy is a whitelist enforcing mandatory
access control on each session request. The example illustrates well that
such an elaborative specification is possible in an intuitive manner.
However, it is pretty comprehensive. In cases where the elaborative
specification of service routing is not fundamentally important, in
particular during development, the use of wildcards can help to simplify the
configuration. The 'wildcard' example demonstrates the use of a default route
for session-request resolution and wildcards. This variant is less strict
about which child uses which service. For development, its simplicity is
beneficial but for deployment, we recommend removing wildcards
('<default-route>', '<any-child>', and '<any-service>') altogether. The
absence of such wildcards is easy to check automatically to ensure that
service routes are explicitly whitelisted.

Further configuration examples can be found in the 'os/config/' directory.