diff --git a/doc/build_system.txt b/doc/build_system.txt
deleted file mode 100644
index de926c09f3..0000000000
--- a/doc/build_system.txt
+++ /dev/null
@@ -1,517 +0,0 @@


                       =======================
                       The Genode build system
                       =======================


                            Norman Feske

Abstract
########

The Genode OS Framework comes with a custom build system that is designed for
the creation of highly modular and portable systems software. Understanding
its basic concepts is pivotal for using the full potential of the framework.
This document introduces those concepts and the best practices of putting them
to good use. Besides building software components from source code, common
and repetitive development tasks are the testing of individual components
and the integration of those components into complex system scenarios. To
streamline such tasks, the build system is accompanied by special tooling
support. This document introduces those tools.


Build directories and repositories
##################################

The build system is designed to never touch the source tree. The procedure of
building components and integrating them into system scenarios is done at
a distinct build directory. One build directory targets a specific platform,
i.e., a kernel and hardware architecture. Because the source tree is decoupled
from the build directory, one source tree can have many different build
directories associated, each targeted at another platform.

The recommended way for creating a build directory is the use of the
'create_builddir' tool located at _<genode-dir>/tool/_. When started without
arguments, the tool prints its usage information. For creating a new
build directory, one of the listed target platforms must be specified.
Furthermore, the location of the new build directory has to be specified via
the 'BUILD_DIR=' argument. For example:

! cd <genode-dir>
! ./tool/create_builddir linux_x86 BUILD_DIR=/tmp/build.linux_x86

This command will create a new build directory for the Linux/x86 platform
at _/tmp/build.linux_x86/_.


Build-directory configuration via 'build.conf'
==============================================

The fresh build directory will contain a 'Makefile', which is a symlink to
_tool/builddir/build.mk_. This makefile is the front end of the build system
and not supposed to be edited. Besides the makefile, there is an _etc/_
subdirectory that contains the build-directory configuration. For most
platforms, there is only a single _build.conf_ file, which defines the parts of
the Genode source tree incorporated in the build process. Those parts are
called _repositories_.

The repository concept allows for keeping the source code well separated for
different concerns. For example, the platform-specific code for each target
platform is located in a dedicated _base-<platform>_ repository. Also,
different abstraction levels and features of the system reside in different
repositories. The _etc/build.conf_ file defines the set of repositories to
consider in the build process. At build time, the build system overlays the
directory structures of all repositories specified via the 'REPOSITORIES'
declaration to form a single logical source tree. By changing the list of
'REPOSITORIES', the view of the build system on the source tree can be altered.
The _etc/build.conf_ as found in a freshly created build directory lists the
_base-<platform>_ repository of the platform selected at the 'create_builddir'
command line as well as the 'base', 'os', and 'demo' repositories needed for
compiling Genode's default demonstration scenario. Furthermore, there are a
number of commented-out lines that can be uncommented to enable additional
repositories.

Note that the order of the repositories listed in the 'REPOSITORIES' declaration
is important. Front-most repositories shadow subsequent repositories.
This makes the repository mechanism a powerful tool for tweaking existing
repositories: By adding a custom repository in front of another one, customized
versions of single files (e.g., header files or target description files) can
be supplied to the build system without changing the original repository.


Building targets
================

To build all targets contained in the list of 'REPOSITORIES' as defined in
_etc/build.conf_, simply issue 'make'. This way, all components that are
compatible with the build directory's base platform will be built. In practice,
however, only some of those components may be of interest. Hence, the build
can be tailored to those components which are of actual interest by specifying
source-code subtrees. For example, using the following command
! make core server/nitpicker
the build system builds all targets found in the 'core' and 'server/nitpicker'
source directories. You may specify any number of subtrees to the build
system. As indicated by the build output, the build system revisits
each library that is used by each target found in the specified subtrees.
This is very handy for developing libraries because instead of re-building
your library and then your library-using program, you just build your program
and that's it. This concept even works recursively, which means that libraries
may depend on other libraries.

In practice, you won't ever need to build the _whole tree_ but only the
targets that you are interested in.


Cleaning the build directory
============================

To remove all but kernel-related generated files, use
! make clean

To remove all generated files, use
! make cleanall

Both 'clean' and 'cleanall' won't remove any files from the _bin/_
subdirectory. This makes _bin/_ a safe place for files that are
unrelated to the build process, yet required for the integration stage, e.g.,
binary data.


Controlling the verbosity of the build process
==============================================

To understand the inner workings of the build process in more detail, you can
tell the build system to display each directory change by specifying

! make VERBOSE_DIR=

If you are interested in the arguments that are passed to each invocation of
'make', you can make them visible via

! make VERBOSE_MK=

Furthermore, you can observe each single shell-command invocation by specifying

! make VERBOSE=

Of course, you can combine these verbosity toggles for maximizing the noise.


Enabling parallel builds
========================

To utilize multiple CPU cores during the build process, you may invoke 'make'
with the '-j<N>' argument, where '<N>' denotes the number of parallel jobs.
If manually specifying this argument becomes an inconvenience, you may add the
following line to your _etc/build.conf_ file:

! MAKE += -j<N>

This way, the build system will always use '<N>' CPUs for building.


Caching inter-library dependencies
==================================

The build system allows for repeating the last build without performing any
library-dependency checks by using:

! make again

The use of this feature can significantly improve the work flow during
development because in contrast to source code, library dependencies rarely
change. So the time needed for re-creating inter-library dependencies at each
build can be saved.
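
For illustration, the relevant declarations of an _etc/build.conf_ for a
Linux/x86 build directory might look as follows. This is a sketch only: the
'GENODE_DIR' path and the exact set of repository lines depend on the local
installation and the selected platform.

```make
# etc/build.conf (illustrative sketch, not a verbatim copy)
GENODE_DIR := /path/to/genode

# repositories overlaid to form the logical source tree; front-most
# entries shadow subsequent ones
REPOSITORIES  = $(GENODE_DIR)/base-linux
REPOSITORIES += $(GENODE_DIR)/base
REPOSITORIES += $(GENODE_DIR)/os
REPOSITORIES += $(GENODE_DIR)/demo

# always build with four parallel jobs
MAKE += -j4
```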


Repository directory layout
###########################

Each Genode repository has the following layout:

 Directory  | Description
 ------------------------------------------------------------
 'doc/'     | Documentation, specific for the repository
 ------------------------------------------------------------
 'etc/'     | Default configuration of the build process
 ------------------------------------------------------------
 'mk/'      | The build system
 ------------------------------------------------------------
 'include/' | Globally visible header files
 ------------------------------------------------------------
 'src/'     | Source codes and target build descriptions
 ------------------------------------------------------------
 'lib/mk/'  | Library build descriptions


Creating targets and libraries
##############################

Target descriptions
===================

A good starting point is to look at the init target. The source code of init is
located at _os/src/init/_. In this directory, you will find a target description
file named _target.mk_. This file contains the building instructions and it is
usually very simple. The build process is controlled by defining the following
variables.


Build variables to be defined by you
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:'TARGET': is the name of the binary to be created. This is the
  only *mandatory variable* to be defined in a _target.mk_ file.

:'REQUIRES': expresses the requirements that must be satisfied in order to
  build the target. You can find more details about the underlying mechanism in
  Section [Specializations].

:'LIBS': is the list of libraries that are used by the target.

:'SRC_CC': contains the list of '.cc' source files. The default search location
  for source codes is the directory where the _target.mk_ file resides.

:'SRC_C': contains the list of '.c' source files.

:'SRC_S': contains the list of assembly '.s' source files.

:'SRC_BIN': contains binary data files to be linked to the target.

:'INC_DIR': is the list of include search locations. Directories should
  always be appended by using '+='. Never use a plain assignment!

:'EXT_OBJECTS': is a list of Genode-external objects or libraries. This
  variable is mostly used for interfacing Genode with legacy software
  components.


Rarely used variables
---------------------

:'CC_OPT': contains additional compiler options to be used for '.c' as
  well as for '.cc' files.

:'CC_CXX_OPT': contains additional compiler options to be used for the
  C++ compiler only.

:'CC_C_OPT': contains additional compiler options to be used for the
  C compiler only.


Specifying search locations
~~~~~~~~~~~~~~~~~~~~~~~~~~~

When specifying search locations for header files via the 'INC_DIR' variable or
for source files via 'vpath', relative pathnames must not be used. Instead,
you can use the following variables to reference locations within the
source-code repository where your target lives:

:'REP_DIR': is the base directory of the current source-code repository.
  Specifying locations relative to the base of the repository is normally
  not needed by _target.mk_ files but is the norm for library descriptions.

:'PRG_DIR': is the directory where your _target.mk_ file resides. This
  variable is always to be used when specifying a relative path.


Library descriptions
====================

In contrast to target descriptions that are scattered across the whole source
tree, library descriptions are located at the central place _lib/mk_. Each
library corresponds to a _<libname>.mk_ file, whose base name is the name of
the library. Therefore, no 'TARGET' variable needs to be set.
The source-code locations are expressed as '$(REP_DIR)'-relative 'vpath'
commands.
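
To illustrate the interplay of these variables, a minimal hypothetical
library-description file _lib/mk/greet.mk_ could look as follows. All names
are made up for the example.

```make
# lib/mk/greet.mk - hypothetical library description (illustrative only)

SRC_CC   = greet.cc                # source file resolved via the vpath below
INC_DIR += $(REP_DIR)/include      # append with '+=', never assign

# resolve the source file relative to the repository base
vpath greet.cc $(REP_DIR)/src/lib/greet
```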

Library-description files support the following additional declaration:

:'SHARED_LIB = yes': declares that the library should be built as a shared
  object rather than a static library. The resulting object will be called
  _<libname>.lib.so_.


Specializations
===============

Building components for different platforms likely implicates portions of code
that are tied to certain aspects of the target platform. For example, a target
platform may be characterized by

* A kernel API such as L4v2, Linux, L4.sec,
* A hardware architecture such as x86, ARM, Coldfire,
* A certain hardware facility such as a custom device, or
* Other properties such as software license requirements.

Each of these attributes expresses a specialization of the build process. The
build system provides a generic mechanism to handle such specializations.

The _programmer_ of a software component knows the properties on which the
software relies and thus specifies these requirements in the build-description
file.

The _user/customer/builder_ decides to build software for a specific platform
and defines the platform specifics via the 'SPECS' variable per build
directory in _etc/specs.conf_. In addition to an (optional) _etc/specs.conf_
file within the build directory, the build system incorporates the first
_etc/specs.conf_ file found in the repositories as configured for the
build directory. For example, for a 'linux_x86' build directory, the
_base-linux/etc/specs.conf_ file is used by default. The build directory's
'specs.conf' file can still be used to extend the 'SPECS' declarations, for
example to enable special features.

Each '<specname>' in the 'SPECS' variable instructs the build system to

* Include the 'make' rules of a corresponding _base/mk/spec-<specname>.mk_
  file. This enables the customization of the build process for each platform.

* Search for _<libname>.mk_ files in the _lib/mk/<specname>/_ subdirectory.
  This way, we can provide alternative implementations of one and the same
  library interface for different platforms.

Before a target or library gets built, the build system checks if the 'REQUIRES'
entries of the build-description file are satisfied by entries of the 'SPECS'
variable. The compilation is executed only if each entry in the 'REQUIRES'
variable is present in the 'SPECS' variable as supplied by the build-directory
configuration.


Building tools to be executed on the host platform
==================================================

Sometimes, software requires custom tools that are used to generate source
code or other ingredients for the build process, for example IDL compilers.
Such tools won't be executed on top of Genode but on the host platform
during the build process. Hence, they must be compiled with the tool chain
installed on the host, not the Genode tool chain.

The Genode build system accommodates the building of such host tools as a side
effect of building a library or a target. Even though it is possible to add
the tool-compilation step to a regular build-description file, it is
recommended to introduce a dedicated pseudo library for building such tools.
This way, the rules for building host tools are kept separate from rules that
refer to Genode programs. By convention, the pseudo library should be named
_<package>_host_tools_ and the host tools should be built at
_<build-dir>/tool/<package>/_. With _<package>_, we refer to the name of the
software package the tool belongs to, e.g., qt5 or mupdf. To build a tool
named _<tool>_, the pseudo library contains a custom make rule like the
following:

! $(BUILD_BASE_DIR)/tool/<package>/<tool>:
! 	$(MSG_BUILD)$(notdir $@)
! 	$(VERBOSE)mkdir -p $(dir $@)
! 	$(VERBOSE)...build commands...

To let the build system trigger the rule, add the custom target to the
'HOST_TOOLS' variable:

! HOST_TOOLS += $(BUILD_BASE_DIR)/tool/<package>/<tool>

Once the pseudo library for building the host tools is in place, it can be
referenced by each target or library that relies on the respective tools via
the 'LIBS' declaration. The tool can be invoked by referring to
'$(BUILD_BASE_DIR)/tool/<package>/<tool>'.

For an example of using custom host tools, please refer to the mupdf package
found within the libports repository. During the build of the mupdf library,
two custom tools, fontdump and cmapdump, are invoked. The tools are built via
the _lib/mk/mupdf_host_tools.mk_ library-description file. The actual mupdf
library (_lib/mk/mupdf.mk_) has the pseudo library 'mupdf_host_tools' listed
in its 'LIBS' declaration and refers to the tools relative to
'$(BUILD_BASE_DIR)'.


Building additional custom targets accompanying a library or program
====================================================================

There are cases where it is important to build additional targets besides the
standard files built for a library or program. Writing specific make rules for
the commands that generate those target files poses no problem, but for the
targets to be built, a proper dependency must be specified. To achieve this,
such additional targets should be added to the 'CUSTOM_TARGET_DEPS' variable,
as done, for example, in the iwl_firmware library of the dde_linux repository:

! CUSTOM_TARGET_DEPS += $(addprefix $(BIN_DIR)/,$(IMAGES))


Automated integration and testing
#################################

Genode's cross-kernel portability is one of the prime features of the
framework. However, each kernel takes a different route when it comes to
configuring, integrating, and booting the system. Hence, for using a particular
kernel, profound knowledge about the boot concept and the kernel-specific tools
is required. To streamline the testing of Genode-based systems across the many
different supported kernels, the framework comes equipped with tools that
relieve you from these peculiarities.

Run scripts
===========

Using so-called run scripts, complete Genode systems can be described in a
concise and kernel-independent way. Once created, a run script can be used
to integrate and test-drive a system scenario directly from the build directory.
The best way to get acquainted with the concept is reviewing the run script
for the 'hello_tutorial' located at _hello_tutorial/run/hello.run_.
Let's revisit each step expressed in the _hello.run_ script:

* Building the components needed for the system using the 'build' command.
  This command instructs the build system to compile the targets listed in
  the brace block. It has the same effect as manually invoking 'make' with
  the specified argument from within the build directory.

* Creating a new boot directory using the 'create_boot_directory' command.
  The integration of the scenario is performed in a dedicated directory at
  _<build-dir>/var/run/<run-script-name>/_. When the run script is finished,
  this directory will contain all components of the final system. In the
  following, we will refer to this directory as run directory.

* Installing the Genode 'config' file into the run directory using the
  'install_config' command. The argument to this command will be written
  to a file called 'config' at the run directory, which is picked up by
  Genode's init process.

* Creating a bootable system image using the 'build_boot_image' command.
  This command copies the specified list of files from the _<build-dir>/bin/_
  directory to the run directory and executes the platform-specific steps
  needed to transform the content of the run directory into a bootable
  form. This form depends on the actual base platform and may be an ISO
  image or a bootable ELF image.

* Executing the system image using the 'run_genode_until' command. Depending
  on the base platform, the system image will be executed using an emulator.
  For most platforms, Qemu is the tool of choice used by default.
  On Linux,
  the scenario is executed by starting 'core' directly from the run
  directory. The 'run_genode_until' command takes a regular expression
  as argument. If the log output of the scenario matches the specified
  pattern, the 'run_genode_until' command returns. If specifying 'forever'
  as argument (as done in 'hello.run'), this command will never return.
  If a regular expression is specified, an additional argument determines
  a timeout in seconds. If the regular expression does not match until
  the timeout is reached, the run script will abort.

Please note that the _hello.run_ script does not contain kernel-specific
information. Therefore, it can be executed from the build directory of any base
platform by using:

! make run/hello

When invoking 'make' with an argument of the form 'run/*', the build system
will look in all repositories for a run script with the specified name. The run
script must be located in one of the repositories' _run/_ subdirectories and
have the file extension '.run'.

For a more comprehensive run script, _os/run/demo.run_ serves as a good
example. This run script describes Genode's default demo scenario. As seen in
'demo.run', parts of init's configuration can be made dependent on the
platform's properties expressed as spec values. For example, the PCI driver
gets included in init's configuration only on platforms with a PCI bus. For
appending conditional snippets to the _config_ file, there exists the 'append_if'
command, which takes a condition as first and the snippet as second argument.
To test for a SPEC value, the command '[have_spec <spec-value>]' is used as
condition. Analogously to how 'append_if' appends strings, there exists
'lappend_if' to append list items. The latter command is used to conditionally
include binaries in the list of boot modules passed to the 'build_boot_image'
command.
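
Taken together, the steps above take roughly the following shape in a run
script. This is a schematic sketch, not the verbatim content of _hello.run_;
the component names, the abbreviated config, and the log pattern are made up
for illustration:

```tcl
# schematic run script (illustrative, abbreviated configuration)

build { core init test/hello }

create_boot_directory

install_config {
<config>
	<parent-provides> <service name="LOG"/> </parent-provides>
	<start name="hello">
		<resource name="RAM" quantum="1M"/>
	</start>
</config>
}

build_boot_image { core init hello }

# wait at most 10 seconds for the expected log output
run_genode_until {Hello world} 10
```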


The run mechanism explained
===========================

Under the hood, run scripts are executed by an expect interpreter. When the
user invokes a run script via _make run/<run-script>_, the build system invokes
the run tool at _<genode-dir>/tool/run_ with the run script as argument. The
run tool is an expect script that has no other purpose than defining several
commands used by run scripts, including a platform-specific script snippet
called run environment ('env'), and finally including the actual run script.
Whereas _tool/run_ provides the implementations of generic and largely
platform-independent commands, the _env_ snippet included from the platform's
respective _base-<platform>/run/env_ file contains all platform-specific
commands. For reference, the most simplistic run environment is the one at
_base-linux/run/env_, which implements the 'create_boot_directory',
'install_config', 'build_boot_image', and 'run_genode_until' commands for Linux
as base platform. For the other platforms, the run environments are far more
elaborate and document precisely how the integration and boot concept works
on each platform. Hence, the _base-<platform>/run/env_ files are not only
necessary parts of Genode's tooling support but also serve as a resource for
understanding the peculiarities of each kernel.


Using run scripts to implement test cases
=========================================

Because run scripts are actually expect scripts, the whole arsenal of
language features of the Tcl scripting language is available to them. This
turns run scripts into powerful tools for the automated execution of test
cases. A good example is the run script at _libports/run/lwip.run_, which tests
the lwIP stack by running a simple Genode-based HTTP server on Qemu. It fetches
and validates an HTML page from this server.
The run script makes use of a
regular expression as argument to the 'run_genode_until' command to detect the
state when the web server becomes ready, subsequently executes the 'lynx' shell
command to fetch the web site, and employs Tcl's support for regular
expressions to validate the result. The run script works across base platforms
that use Qemu as execution environment.

To get the most out of the run mechanism, a basic understanding of the Tcl
scripting language is required. Furthermore, the functions provided by
_tool/run_ and _base-<platform>/run/env_ should be studied.


Automated testing across base platforms
=======================================

To execute one or multiple test cases on more than one base platform, there
exists a dedicated tool at _tool/autopilot_. Its primary purpose is the
nightly execution of test cases. The tool takes a list of platforms and of
run scripts as arguments and executes each run script on each platform. The
build directory for each platform is created at
_/tmp/autopilot.<username>/<platform>_ and the output of each run script is
written to a file called _<platform>.<run-script>.log_. On stderr, autopilot
prints statistics about whether or not each run script executed
successfully on each platform. If at least one run script failed, autopilot
returns a non-zero exit code, which makes it straightforward to include
autopilot in an automated build-and-test environment.


diff --git a/doc/depot.txt b/doc/depot.txt
deleted file mode 100644
index abd9196a13..0000000000
--- a/doc/depot.txt
+++ /dev/null
@@ -1,514 +0,0 @@


                    ============================
                    Package management on Genode
                    ============================


                            Norman Feske


Motivation and inspiration
##########################

The established system-integration work flow with Genode is based on
the 'run' tool, which automates the building, configuration, integration,
and testing of Genode-based systems.
Whereas the run tool succeeds in
overcoming the challenges that come with Genode's diversity of kernels and
supported hardware platforms, its scalability is limited to
appliance-like system scenarios: The result of the integration process is
a system image with a certain feature set. Whenever requirements change,
the system image is replaced with a newly created image that takes those
requirements into account. In practice, there are two limitations of this
system-integration approach:

First, since the run tool implicitly builds all components required for a
system scenario, the system integrator has to compile all components from
source. E.g., if a system includes a component based on Qt5, one needs to
compile the entire Qt5 application framework, which adds significant
overhead to the actual system-integration tasks of composing and configuring
components.

Second, general-purpose systems tend to become too complex and diverse to be
treated as system images. When looking at commodity OSes, each installation
differs with respect to the installed set of applications, user preferences,
used device drivers, and system preferences. A system based on the run tool's
work flow would require the user to customize the run script of the system for
each tweak. To stay up to date, the user would need to re-create the
system image from time to time while manually maintaining any customizations.
In practice, this is a burden very few end users are willing to endure.

The primary goal of Genode's package management is to overcome these
scalability limitations, in particular:

* Alleviating the need to build everything that goes into system scenarios
  from scratch,
* Facilitating modular system compositions while abstracting from technical
  details,
* On-target system update and system development,
* Assuring the user that system updates are safe to apply by providing the
  ability to easily roll back the system or parts thereof to previous versions,
* Securing the integrity of the deployed software,
* Fostering a federalistic evolution of Genode systems,
* Low friction for existing developers.

The design of Genode's package-management concept is largely influenced by Git
as well as the [https://nixos.org/nix/ - Nix] package manager. In particular,
the latter opened our eyes to discover the potential that lies beyond the
package management employed in state-of-the-art commodity systems. Even though
we considered adapting Nix for Genode and actually conducted intensive
experiments in this direction (thanks to Emery Hemingway, who pushed forward
this line of work), we settled on a custom solution that leverages Genode's
holistic view on all levels of the operating system including the build system
and tooling, source structure, ABI design, framework API, system
configuration, inter-component interaction, and the components themselves.
Whereas Nix is designed for being used on top of Linux, Genode's whole-system
view led us to simplifications that eliminated the need for Nix's powerful
features like its custom description language.


Nomenclature
############

When speaking about "package management", one has to clarify what a "package"
in the context of an operating system represents. Traditionally, a package
is the unit of delivery of a bunch of "dumb" files, usually wrapped up in
a compressed archive. A package may depend on the presence of other
packages. Thereby, a dependency graph is formed.
To express how packages fit
with each other, a package is usually accompanied by meta data
(description). Depending on the package manager, package descriptions follow
certain formalisms (e.g., a package-description language) and express
more-or-less complex concepts such as versioning schemes or the distinction
between hard and soft dependencies.

Genode's package management does not follow this notion of a "package".
Instead of subsuming all deliverable content under one term, we distinguish
different kinds of content, each in a tailored and simple form. To avoid a
clash with the common meaning of the term "package", we speak of
"archives" as the basic unit of delivery. The following subsections introduce
the different categories.

Archives are named with their version as suffix, appended via a slash. The
suffix is maintained by the author of the archive. The recommended naming
scheme is the use of the release date as version suffix, e.g.,
'report_rom/2017-05-14'.


Raw-data archives
=================

A raw-data archive contains arbitrary data that is - in contrast to executable
binaries - independent from the processor architecture. Examples are
configuration data, game assets, images, or fonts. The content of raw-data
archives is expected to be consumed by components at runtime. It is not
relevant for the build process of executable binaries. Each raw-data
archive contains merely a collection of data files. There is no meta data.


API archive
===========

An API archive has the structure of a Genode source-code repository. It may
contain all the typical content of such a source-code repository such as header
files (in the _include/_ subdirectory), source codes (in the _src/_
subdirectory), library-description files (in the _lib/mk/_ subdirectory), or
ABI symbols (in the _lib/symbols/_ subdirectory). At the top level, a LICENSE
file is expected that clarifies the license of the contained source code.
There is no
meta data contained in an API archive.

An API archive is meant to provide _ingredients_ for building components. The
canonical example is the public programming interface of a library (header
files) and the library's binary interface in the form of an ABI-symbols file.
One API archive may contain the interfaces of multiple libraries. For example,
the interfaces of libc and libm may be contained in a single "libc" API
archive because they are closely related to each other. Conversely, an API
archive may contain a single header file only. The granularity of those
archives may vary. But they have in common that they are used at build time
only, not at runtime.


Source archive
==============

Like an API archive, a source archive has the structure of a Genode
source-tree repository and is expected to contain all the typical content of
such a source repository along with a LICENSE file. But unlike an API archive,
it contains descriptions of actual build targets in the form of Genode's usual
'target.mk' files.

In addition to the source code, a source archive contains a file
called 'used_apis', which contains a list of API-archive names with each
name on a separate line. For example, the 'used_apis' file of the 'report_rom'
source archive looks as follows:

! base/2017-05-14
! os/2017-05-13
! report_session/2017-05-13

The 'used_apis' file declares the APIs to incorporate into the build
process when building the source archive. Hence, they represent _build-time_
_dependencies_ on the specific API versions.

A source archive may be equipped with a top-level file called 'api' containing
the name of exactly one API archive. If present, it declares that the source
archive _implements_ the specified API. For example, the 'libc/2017-05-14'
source archive contains the actual source code of the libc and libm as well as
an 'api' file with the content 'libc/2017-04-13'.
The latter refers to the API
implemented by this version of the libc source package (note the differing
versions of the API and source archives).


Binary archive
==============

A binary archive contains the build result of the equally-named source archive
when built for a particular architecture. That is, it contains all files that
would appear in the _<build-dir>/bin/_ subdirectory when building all targets
present in the source archive. There is no meta data present in a binary
archive.

A binary archive is created out of the content of its corresponding source
archive and all API archives listed in the source archive's 'used_apis' file.
Note that since a binary archive depends on only one source archive, which
has no further dependencies, all binary archives can be built independently
from each other. For example, a libc-using application needs the source code
of the application as well as the libc's API archive (the libc's header files
and ABI) but it does not need the actual libc library to be present.


Package archive
===============

A package archive contains an 'archives' file with a list of archive names
that belong together at runtime. Each listed archive appears on a separate
line. For example, the 'archives' file of the package archive for the window
manager 'wm/2018-02-26' looks as follows:

! genodelabs/raw/wm/2018-02-14
! genodelabs/src/wm/2018-02-26
! genodelabs/src/report_rom/2018-02-26
! genodelabs/src/decorator/2018-02-26
! genodelabs/src/floating_window_layouter/2018-02-26

In contrast to the list of 'used_apis' of a source archive, the content of
the 'archives' file denotes the origin of the respective archives
("genodelabs"), the archive type, followed by the versioned name of the
archive.

An 'archives' file may specify raw archives, source archives, or package
archives (as type 'pkg'). It thereby allows the expression of _runtime
dependencies_.
If a package archive lists another package archive, it inherits
the content of the listed archive. This way, a new package archive may easily
customize an existing package archive.

A package archive does not specify binary archives directly as they differ
between architectures and are already referenced by the source archives.

In addition to an 'archives' file, a package archive is expected to contain
a 'README' file explaining the purpose of the collection.


Depot structure
###############

Archives are stored within a directory tree called _depot/_. The depot
is structured as follows:

! <user>/pubkey
! <user>/download
! <user>/src/<name>/<version>/
! <user>/api/<name>/<version>/
! <user>/raw/<name>/<version>/
! <user>/pkg/<name>/<version>/
! <user>/bin/<arch>/<name>/<version>/

The <user> stands for the origin of the contained archives. For example, the
official archives provided by Genode Labs reside in a _genodelabs/_
subdirectory. Within this directory, there is a 'pubkey' file with the
user's public key that is used to verify the integrity of archives downloaded
from the user. The file 'download' specifies the download location as a URL.

Subsuming archives in a subdirectory that corresponds to their origin
(user) serves two purposes. First, it provides a user-local name space for
versioning archives. E.g., there might be two versions of a
'nitpicker/2017-04-15' source archive, one by "genodelabs" and one by
"nfeske". However, since each version resides under its origin's
subdirectory, version-naming conflicts between different origins cannot
happen. Second, by allowing multiple archive origins in the depot
side-by-side, package archives may incorporate archives of different origins,
which fosters the goal of a federalistic development, where contributions of
different origins can be easily combined.

The actual archives are stored in the subdirectories named after the archive
types ('raw', 'api', 'src', 'bin', 'pkg').
Archives contained in the _bin/_
subdirectories are further subdivided in the various architectures (like
'x86_64' or 'arm_v7').


Depot management
################

The tools for managing the depot content reside under the _tool/depot/_
directory. When invoked without arguments, each tool prints a brief
description of the tool and its arguments.

Unless stated otherwise, the tools are able to consume any number of archives
as arguments. By default, they perform their work sequentially. This can be
changed by the '-j<N>' argument, where '<N>' denotes the desired level of
parallelization. For example, by specifying '-j4' to the _tool/depot/build_
tool, four concurrent jobs are executed during the creation of binary
archives.


Downloading archives
====================

The depot can be populated with archives in two ways, either by creating
the content from locally available source codes as explained in Section
[Automated extraction of archives from the source tree], or by downloading
ready-to-use archives from a web server.

In order to download archives originating from a specific user, the depot's
corresponding user subdirectory must contain two files:

:_pubkey_: contains the public key of the GPG key pair used by the creator
  (aka "user") of the to-be-downloaded archives for signing the archives. The
  file contains the ASCII-armored version of the public key.

:_download_: contains the base URL of the web server where to fetch archives
  from. The web server is expected to mirror the structure of the depot.
  That is, the base URL is followed by a subdirectory for the user,
  which contains the archive-type-specific subdirectories.

If both the public key and the download location are defined, the download
tool can be used as follows:

! ./tool/depot/download genodelabs/src/zlib/2018-01-10

The tool automatically downloads the specified archives and their
dependencies.
For example, as the zlib depends on the libc API, the libc API
archive is downloaded as well. All archive types are accepted as arguments,
including binary and package archives. Furthermore, it is possible to
download all binary archives referenced by a package archive. For example,
the following command downloads the window-manager (wm) package archive
including all binary archives for the 64-bit x86 architecture. Downloaded
binary archives are always accompanied by their corresponding source and
used API archives.

! ./tool/depot/download genodelabs/pkg/x86_64/wm/2018-02-26

Archive content is not downloaded directly to the depot. Instead, the
individual archives and signature files are downloaded to a quarantine area
in the form of a _public/_ directory located in the root of Genode's source
tree. As its name suggests, the _public/_ directory contains data that is
imported from or to-be exported to the public. The download tool populates
it with the downloaded archives in their compressed form accompanied by
their signatures.

The compressed archives are not extracted before their signature is checked
against the public key defined at _depot/<user>/pubkey_. If, however, the
signature is valid, the archive content is imported to the target destination
within the depot. This procedure ensures that depot content - whenever
downloaded - is blessed by a cryptographic signature of its creator.


Building binary archives from source archives
=============================================

With the depot populated with source and API archives, one can use the
_tool/depot/build_ tool to produce binary archives. The arguments have the
form '<user>/bin/<arch>/<name>/<version>' where '<arch>' stands for the
targeted CPU architecture. For example, the following command builds the
'zlib' library for the 64-bit x86 architecture. It executes four concurrent
jobs during the build process.

! ./tool/depot/build genodelabs/bin/x86_64/zlib/2018-01-10 -j4

Note that the command expects a specific version of the source archive as
argument. The depot may contain several versions. So the user has to decide
which one to build.

After the tool is finished, the freshly built binary archive can be found in
the depot within the _genodelabs/bin/<arch>/<name>/<version>/_ subdirectory.
Only the final result of the build process is preserved. In the example
above, that would be the _zlib.lib.so_ library.

For debugging purposes, it might be interesting to inspect the intermediate
state of the build. This is possible by adding 'KEEP_BUILD_DIR=1' as argument
to the build command. The binary's intermediate build directory can be
found beside the binary archive's location, named with a '.build' suffix.

By default, the build tool won't attempt to rebuild a binary archive that is
already present in the depot. However, it is possible to force a rebuild via
the 'REBUILD=1' argument.


Publishing archives
===================

Archives located in the depot can be conveniently made available to the
public using the _tool/depot/publish_ tool. Given an archive path, the tool
takes care of determining all archives that are implicitly needed by the
specified one, wrapping the archive's content into compressed tar archives,
and signing those.

As a precondition, the tool requires you to possess the private key that
matches the _depot/<user>/pubkey_ file within your depot. The key pair
should be present in the key ring of your GNU privacy guard.

To publish archives, one needs to specify the specific version to publish.
For example:

! ./tool/depot/publish <user>/pkg/x86_64/wm/2018-02-26

The command checks that the specified archive and all dependencies are
present in the depot. It then proceeds with the archiving and signing
operations. For the latter, the pass phrase for your private key will be
requested.
The
publish tool prints the information about the processed archives, e.g.:

! publish /.../public/<user>/api/base/2018-02-26.tar.xz
! publish /.../public/<user>/api/framebuffer_session/2017-05-31.tar.xz
! publish /.../public/<user>/api/gems/2018-01-28.tar.xz
! publish /.../public/<user>/api/input_session/2018-01-05.tar.xz
! publish /.../public/<user>/api/nitpicker_gfx/2018-01-05.tar.xz
! publish /.../public/<user>/api/nitpicker_session/2018-01-05.tar.xz
! publish /.../public/<user>/api/os/2018-02-13.tar.xz
! publish /.../public/<user>/api/report_session/2018-01-05.tar.xz
! publish /.../public/<user>/api/scout_gfx/2018-01-05.tar.xz
! publish /.../public/<user>/bin/x86_64/decorator/2018-02-26.tar.xz
! publish /.../public/<user>/bin/x86_64/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<user>/bin/x86_64/report_rom/2018-02-26.tar.xz
! publish /.../public/<user>/bin/x86_64/wm/2018-02-26.tar.xz
! publish /.../public/<user>/pkg/wm/2018-02-26.tar.xz
! publish /.../public/<user>/raw/wm/2018-02-14.tar.xz
! publish /.../public/<user>/src/decorator/2018-02-26.tar.xz
! publish /.../public/<user>/src/floating_window_layouter/2018-02-26.tar.xz
! publish /.../public/<user>/src/report_rom/2018-02-26.tar.xz
! publish /.../public/<user>/src/wm/2018-02-26.tar.xz

According to the output, the tool populates a directory called _public/_
at the root of the Genode source tree with the to-be-published archives.
The content of the _public/_ directory is now ready to be copied to a
web server, e.g., by using rsync.


Automated extraction of archives from the source tree
#####################################################

Genode users are expected to populate their local depot with content obtained
via the _tool/depot/download_ tool. However, Genode developers need a way to
create depot archives locally in order to make them available to users.
Thanks to the _tool/depot/extract_ tool, the assembly of archives does not
need to be a manual process.
Instead, archives can be conveniently generated out of the
source codes present in the Genode source tree and the _contrib/_ directory.

However, the granularity of splitting source code into archives, the
definition of what a particular API entails, and the relationship between
archives must be augmented by the archive creator as this kind of information
is not present in the source tree as is. This is where so-called "archive
recipes" enter the picture. An archive recipe defines the content of an
archive. Such recipes can be located at a _recipes/_ subdirectory of any
source-code repository, similar to how port descriptions and run scripts
are organized. Each _recipes/_ directory contains subdirectories for the
archive types, which, in turn, contain a directory for each archive. The
latter is called a _recipe directory_.

Recipe directory
----------------

The recipe directory is named after the archive _omitting the archive
version_ and contains at least one file named _hash_. This file defines the
version of the archive along with a hash value of the archive's content,
separated by a space character. By tying the version name to a particular
hash value, the _extract_ tool is able to detect the appropriate points in
time whenever the version should be increased due to a change of the
archive's content.

API, source, and raw-data archive recipes
-----------------------------------------

Recipe directories for API, source, or raw-data archives contain a
_content.mk_ file that defines the archive content in the form of make
rules. The content.mk file is executed from the archive's location within
the depot. Hence, the contained rules can refer to archive-relative files as
targets.
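To illustrate the idea, a content.mk for a hypothetical raw-data archive
might look as follows. The recipe name and file names are made up for this
sketch and are not taken from an actual recipe:

```make
# Sketch of a content.mk for a hypothetical raw-data archive.
# The default rule is invoked from the archive's location within the
# depot, so the targets denote archive-relative files.
content: example.config

example.config:
	cp $(REP_DIR)/recipes/raw/example/example.config $@
```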
The first (default) rule of the content.mk file is executed with a
customized make environment:

:GENODE_DIR: A variable that holds the path to the root of the Genode source
  tree.
:REP_DIR: A variable with the path to the source-code repository where the
  recipe is located.
:port_dir: A make function that returns the directory of a port within the
  _contrib/_ directory. The function expects the location of the
  corresponding port file as argument. For example, the 'zlib' recipe
  residing in the _libports/_ repository may specify '$(REP_DIR)/ports/zlib'
  to access the 3rd-party zlib source code.

Source-archive recipes contain simplified versions of the 'used_apis' and
(for libraries) 'api' files as found in the archives. In contrast to the
depot's counterparts of these files, which contain version-suffixed names,
the files contained in recipe directories omit the version suffix. This
is possible because the extract tool always extracts the _current_ version
of a given archive from the source tree. This current version is already
defined in the corresponding recipe directory.

Package-archive recipes
-----------------------

The recipe directory for a package archive contains the content of the
to-be-created package archive except for the _archives_ file. All other
files are copied verbatim to the archive. The content of the recipe's
_archives_ file may omit the version information from the listed
ingredients. Furthermore, the user part of each entry can be left blank by
using '_' as a wildcard. When generating the package archive from the
recipe, the extract tool replaces this wildcard with the name of the user
who creates the archive.


Convenience front-end to the extract, build tools
#################################################

For developers, the work flow of interacting with the depot is most often
the combination of the _extract_ and _build_ tools, whereas the latter
expects concrete version names as arguments.
The _create_ tool accelerates this common
usage pattern by allowing the user to omit the version names. Operations
implicitly refer to the _current_ version of the archives as defined in
the recipes.

Furthermore, the _create_ tool is able to manage version updates for the
developer. If invoked with the argument 'UPDATE_VERSIONS=1', it automatically
updates the hash files of the involved recipes, taking the current date as
version name. This is a valuable assistance in situations where a commonly
used API changes. In this case, the versions of the API and all dependent
archives must be increased, which would otherwise be a labour-intensive
task. If the depot already contains an archive of the current version, the
create tool won't re-create the depot archive by default. Local
modifications of the source code in the repository do not automatically
result in a new archive. To ensure that the depot archive is current, one
can specify 'FORCE=1' to the create tool. With this argument, existing depot
archives are replaced by freshly extracted ones and version updates are
detected. When specified for creating binary archives, 'FORCE=1' normally
implies 'REBUILD=1'. To prevent the superfluous rebuild of binary archives
whose source versions remain unchanged, 'FORCE=1' can be combined with the
argument 'REBUILD='.


Accessing depot content from run scripts
########################################

The depot tools are not meant to replace the run tool but rather to
complement it. When both tools are combined, the run tool implicitly refers
to "current" archive versions as defined by the archives' corresponding
recipes. This way, the regular run-tool work flow can be maintained while
attaining a productivity boost by fetching content from the depot instead of
building it.

Run scripts can use the 'import_from_depot' function to incorporate archive
content from the depot into a scenario.
The function must be called after the
'create_boot_directory' function and takes any number of pkg, src, or raw
archives as arguments. An archive is specified as a depot-relative path of
the form '<user>/<type>/<name>'. Run scripts may call 'import_from_depot'
repeatedly. Each argument can refer to a specific version of an archive or
just the version-less archive name. In the latter case, the current version
(as defined by a corresponding archive recipe in the source tree) is used.

If a 'src' archive is specified, the run tool integrates the content of
the corresponding binary archive into the scenario. The binary archives
are selected according to the spec values as defined for the build
directory.


 =============================
 How to start exploring Genode
 =============================

 Norman Feske


Abstract
########

This guide is meant to give you a painless start with the Genode OS
Framework. It explains the steps needed to get a simple demo system running
on Linux first, followed by instructions on how to run the same scenario
on a microkernel.


Quick start to build Genode for Linux
#####################################

The best starting point for exploring Genode is to run it on Linux.
Make sure
that your system satisfies the following requirements:

* GNU Make version 3.81 or newer
* 'libsdl2-dev', 'libdrm-dev', and 'libgbm-dev' (needed to run interactive
  system scenarios directly on Linux)
* 'tclsh' and 'expect'
* 'byacc' (only needed for the L4/Fiasco kernel)
* 'qemu' and 'xorriso' (for testing non-Linux platforms via Qemu)

For using the entire collection of ported 3rd-party software, the following
packages should be installed additionally: 'autoconf2.64', 'autogen',
'bison', 'flex', 'g++', 'git', 'gperf', 'libxml2-utils', 'subversion', and
'xsltproc'.

Your exploration of Genode starts with obtaining the source code of the
[https://sourceforge.net/projects/genode/files/latest/download - latest version]
of the framework. For detailed instructions and alternatives to the
download from Sourceforge please refer to [https://genode.org/download].
Furthermore, you will need to install the official Genode tool chain, which
you can download at [https://genode.org/download/tool-chain].

The Genode build system never touches the source tree but generates object
files, libraries, and programs in a dedicated build directory. We do not
have a build directory yet. For a quick start, let us create one for the
Linux base platform:

! cd <genode-dir>
! ./tool/create_builddir x86_64

This creates a new build directory for building x86_64 binaries at
'./build/x86_64'. The build system creates unified binaries that work on the
given architecture independently of the underlying base platform, in this
case Linux.

Now change into the fresh build directory:

! cd build/x86_64

Please uncomment the following line in 'etc/build.conf' to make the
build process as smooth as possible.

! RUN_OPT += --depot-auto-update

To give Genode a try, build and execute a simple demo scenario via:

! make KERNEL=linux BOARD=linux run/demo

By invoking 'make' with the 'run/demo' argument, all components needed by
the demo scenario are built and the demo is executed. This includes all
components which are implicitly needed by the base platform. The base
platform that the components will be executed upon is selected via the
'KERNEL' and 'BOARD' variables. If you are interested in looking behind the
scenes of the demo scenario, please refer to 'doc/build_system.txt' and the
run script at 'os/run/demo.run'.


Using platforms other than Linux
================================

Running Genode on Linux is the most convenient way to get acquainted with
the framework. However, the point where Genode starts to shine is when used
as the user land executed on a microkernel. The framework supports a variety
of different kernels such as L4/Fiasco, L4ka::Pistachio, OKL4, and NOVA.
Those kernels largely differ in terms of feature sets, build systems, tools,
and boot concepts. To relieve you from dealing with those peculiarities,
Genode provides you with a unified way of using them. For each kernel
platform, there exists a dedicated description file that enables the
'prepare_port' tool to fetch and prepare the designated 3rd-party sources.
Just issue the following command within the toplevel directory of the Genode
source tree:

! ./tool/ports/prepare_port <platform>

Note that each 'base-<platform>' directory comes with a 'README' file, which
you should revisit first when exploring the base platform. Additionally,
most 'base-<platform>' directories provide more in-depth information within
their respective 'doc/' subdirectories.

For the VESA driver on x86, the x86emu library is required and can be
downloaded and prepared by again invoking the 3rd-party sources preparation
tool:

! ./tool/ports/prepare_port x86emu

On x86 base platforms, the GRUB2 boot loader is required and can be
downloaded and prepared by invoking:

! ./tool/ports/prepare_port grub2

Now that the base platform is prepared, the 'create_builddir' tool can be
used to create a build directory for your architecture of choice by giving
the architecture as argument. To see the list of available architectures,
execute 'create_builddir' with no arguments. Note that not all kernels
support all architectures.

For example, to give the demo scenario a spin on the OKL4 kernel, the
following steps are required:

# Download the kernel:
  ! cd <genode-dir>
  ! ./tool/ports/prepare_port okl4
# Create a build directory:
  ! ./tool/create_builddir x86_32
# Uncomment the following line in 'build/x86_32/etc/build.conf':
  ! REPOSITORIES += $(GENODE_DIR)/repos/libports
# Build and execute the demo using Qemu:
  ! make -C build/x86_32 KERNEL=okl4 BOARD=pc run/demo

The procedure works analogously for the other base platforms. You can,
however, reuse the already created build directory and skip its creation
step if the architecture matches.


How to proceed with exploring Genode
####################################

Now that you have taken the first steps into using Genode, you may seek to
get more in-depth knowledge and practical experience. The foundation for
doing so is a basic understanding of the build system. The documentation at
'build_system.txt' provides you with the information about the layout of the
source tree, how new components are integrated, and how complete system
scenarios can be expressed. Equipped with this knowledge, it is time to get
hands-on experience with creating custom Genode components. A good start is
the 'hello_tutorial', which shows you how to implement a simple
client-server scenario. To compose complex scenarios out of many small
components, the documentation of Genode's configuration concept at
'os/doc/init.txt' is an essential reference.

Certainly, you will have further questions on your way exploring Genode.
The best place to get these questions answered is the Genode mailing list.
Please feel welcome to ask your questions and to join the discussions:

:Genode Mailing Lists:

  [https://genode.org/community/mailing-lists]


 ==========================
 Google Summer of Code 2012
 ==========================


Genode Labs has applied as mentoring organization for the Google Summer of
Code program in 2012. This document summarizes all information important to
Genode's participation in the program.

:[http://www.google-melange.com/gsoc/homepage/google/gsoc2012]:
  Visit the official homepage of the Google Summer of Code program.

*Update* Genode Labs was not accepted as mentoring organization for GSoC
2012.


Application of Genode Labs as mentoring organization
####################################################

:Organization ID: genodelabs

:Organization name: Genode Labs

:Organization description:

  Genode Labs is a self-funded company founded by the original creators of
  the Genode OS project. Its primary mission is to bring the Genode
  operating-system technology, which started off as an academic research
  project, to the real world. At present, Genode Labs is the driving force
  behind the Genode OS project.

:Organization home page url:

  http://www.genode-labs.com

:Main organization license:

  GNU General Public License version 2

:Admins:

  nfeske, chelmuth

:What is the URL for your Ideas page?:

  [http://genode.org/community/gsoc_2012]

:What is the main IRC channel for your organization?:

  #genode

:What is the main development mailing list for your organization?:

  genode-main@lists.sourceforge.net

:Why is your organization applying to participate?
What do you hope to gain?:

  During the past three months, our project underwent the transition from a
  formerly company-internal development to a completely open and transparent
  endeavour. By inviting a broad community for participation in shaping the
  project, we hope to advance Genode to become a broadly used and recognised
  technology. GSoC would help us to build our community.

  The project has its roots at the University of Technology Dresden where
  the Genode founders were former members of the academic research staff. We
  have a long and successful track record with regard to supervising
  students. GSoC would provide us with the opportunity to establish and
  cultivate relationships to new students and to spawn excitement about
  Genode OS technology.

:Does your organization have an application template?:

  GSoC student projects follow the same procedure as regular community
  contributions; in particular, the student is expected to sign the Genode
  Contributor's Agreement (see [http://genode.org/community/contributions]).

:What criteria did you use to select your mentors?:

  We selected the mentors on the basis of their long-time involvement with
  the project and their time-tested communication skills. For each proposed
  working topic, there is at least one stakeholder with a profound technical
  background within Genode Labs. This person will be the primary contact for
  the student working on the topic. However, we will encourage the student
  to make his/her development transparent to all community members (i.e.,
  via GitHub). So any community member interested in the topic is able to
  bring in his/her ideas at any stage of development. Consequently, in
  practice, there will be multiple persons mentoring each student.

:What is your plan for dealing with disappearing students?:

  Actively contact them using all channels of communication available to us,
  find out the reason for the disappearance, and try to resolve the problems
  (if they are related to GSoC or our project, for that matter).

:What is your plan for dealing with disappearing mentors?:

  All designated mentors are local to Genode Labs, so the chance for them to
  disappear is very low. However, if a mentor disappears for any serious
  reason (e.g., serious illness), our organization will provide a back-up
  mentor.

:What steps will you take to encourage students to interact with your community?:

  First, we discussed GSoC on our mailing list where we received a very
  positive response. We checked back with other open-source projects related
  to our topics, exchanged ideas, and tried to find synergies between our
  respective projects. For most project ideas, we have created issues in our
  issue tracker to collect technical information and discuss the topic.
  For several topics, we already observed interest of students to
  participate.

  During the work on the topics, the mentors will try to encourage the
  students to play an active role in discussions on our mailing list, also
  on topics that are not strictly related to the student project. We regard
  active participation as key to enabling new community members to develop a
  holistic view onto our project and gather a profound understanding of our
  methodologies.

  Student projects will be carried out in a transparent fashion at GitHub.
  This makes it easy for each community member to get involved, discuss
  the rationale behind design decisions, and audit solutions.


Topics
######

While discussing GSoC participation on our mailing list, we identified the
following topics as being well suited for GSoC projects.
However, if none of
those topics receives resonance from students, there is a more comprehensive
list of topics available at our road map and our collection of future
challenges:

:[http://genode.org/about/road-map]: Road-map
:[http://genode.org/about/challenges]: Challenges


Combining Genode with the HelenOS/SPARTAN kernel
================================================

[http://www.helenos.org - HelenOS] is a microkernel-based multi-server OS
developed at Charles University in Prague. It is based on the SPARTAN
microkernel, which runs on a wide variety of CPU architectures including
Sparc, MIPS, and PowerPC. This broad platform support alone makes SPARTAN an
interesting kernel to look at. But a further motivation is the fact that
SPARTAN does not follow the classical L4 road, providing a kernel API that
comes with its own terminology and different kernel primitives. This makes
the mapping of SPARTAN's kernel API to Genode a challenging endeavour and
would provide us with feedback regarding the universality of Genode's
internal interfaces. Finally, this project has the potential to ignite a
further collaboration between the HelenOS and Genode communities.


Block-level encryption
======================

Protecting privacy is one of the strongest motivational factors for
developing Genode. One pivotal element in that respect is the persistence of
information via block-level encryption. For example, to use Genode every day
at Genode Labs, it's crucial to protect the confidentiality of some
information that's not part of the Genode code base, e.g., emails and
reports. There are several expansion stages imaginable to reach the goal,
and the basic building blocks (block-device interface, ATA/SATA driver for
Qemu) are already in place.

:[https://github.com/genodelabs/genode/issues/55 - Discuss the issue...]:


Virtual NAT
===========

For sharing one physical network interface among multiple applications,
Genode comes with a component called nic_bridge, which implements proxy ARP.
Through this component, each application receives a distinct (virtual)
network interface that is visible to the real network. I.e., each
application requests an IP address via a DHCP request on the local network.
An alternative approach would be a component that implements NAT on Genode's
NIC session interface. This way, the whole Genode system would use only one
IP address visible to the local network. (By stacking multiple NAT and
nic_bridge components together, we could even form complex virtual networks
inside a single Genode system.)

The implementation of the virtual NAT could follow the lines of the existing
nic_bridge component. For parsing network packets, there are already some
handy utilities available (at os/include/net/).

:[https://github.com/genodelabs/genode/issues/114 - Discuss the issue...]:


Runtime for the Go or D programming language
============================================

Genode is implemented in C++. However, we are repeatedly receiving requests
for offering safer alternatives for implementing OS-level functionality
such as device drivers, file systems, and other protocol stacks. The goals
for this project are to investigate the Go and D programming languages with
respect to their use within Genode, port the runtime of those languages
to Genode, and provide a useful level of integration with Genode.


Block cache
===========

Currently, there exists only the iso9660 server that is able to cache block
accesses. A generic solution for caching block-device accesses would be
nice. One suggestion is a component that requests a block session (routed to
a block-device driver) as back end and also announces a block service
(front end) itself.
Such a block-cache server waits for requests at the front end and
forwards them to the back end, but uses its own memory to cache blocks.

The first version could support only read-only block devices (such as CDROM)
by caching the results of read accesses. In this version, we already need an
eviction strategy that kicks in once the block cache gets saturated. For a
start, this could be FIFO or LRU (least recently used).

A more sophisticated version would support write accesses, too. Here, we
need a way to sync blocks to the back end at regular intervals in order to
guarantee that all block-write accesses become persistent after a certain
time. We would also need a way to explicitly flush the block cache (e.g.,
when the front-end block session gets closed).

:[https://github.com/genodelabs/genode/issues/113 - Discuss the issue...]:


; _Since Genode Labs was not accepted as GSoC mentoring organization, the_
; _following section has become irrelevant. Hence, it is commented-out_
;
; Student applications
; ####################
;
; The formal steps for applying to the GSoC program will be posted once
; Genode Labs is accepted as mentoring organization. If you are a student
; interested in working on a Genode-related GSoC project, now is a good time
; to get involved with the Genode community. The best way is joining the
; discussions at our mailing list and the issue tracker. This way, you will
; learn about the currently relevant topics, our discussion culture, and the
; people behind the project.
-
-;
-; :[http://genode.org/community/mailing-lists]: Join our mailing list
-; :[https://github.com/genodelabs/genode/issues]: Discuss issues around Genode
-
diff --git a/doc/porting_guide.txt b/doc/porting_guide.txt
deleted file mode 100644
index dd2fe85597..0000000000
--- a/doc/porting_guide.txt
+++ /dev/null
@@ -1,1451 +0,0 @@
-                            ====================
-                            Genode Porting Guide
-                            ====================
-
-                              Genode Labs GmbH
-
-
-Overview
-########
-
-This document describes the basic workflows for porting applications, libraries,
-and device drivers to the Genode framework. It consists of the following
-sections:
-
-:[http:porting_applications - Porting third-party code to Genode]:
-  Overview of the general steps needed to use 3rd-party code on Genode.
-
-:[http:porting_dosbox - Porting a program to natively run on Genode]:
-  Step-by-step description of applying the steps described in the first
-  section to port an application, using DosBox as an example.
-
-:[http:porting_libraries - Native Genode port of a library]:
-  Many 3rd-party applications have library dependencies. This section shows
-  how to port a library using SDL_net (needed by DosBox) as an example.
-
-:[http:porting_noux_packages - Porting an application to Genode's Noux runtime]:
-  On Genode, there exists an environment specially tailored to execute
-  command-line based Unix software, the so-called Noux runtime. This section
-  demonstrates how to port and execute the tar program within Noux.
-
-:[http:porting_device_drivers - Porting device drivers]:
-  This chapter describes the concepts of how to port a device driver to the
-  Genode framework. It requires the basic knowledge introduced in the previous
-  chapters and should be read last.
-
-Before reading this guide, it is strongly advised to read the "The Genode
-Build System" documentation:
-
-:Build-system manual:
-
-  [http://genode.org/documentation/developer-resources/build_system]
-
-
-Porting third-party code to Genode
-##################################
-
-Porting an existing program or library to Genode is for the most part a
-straightforward task whose effort depends mainly on the complexity of the
-program itself. Genode provides a fairly complete libc based on FreeBSD's
-libc, whose functionality can be extended by so-called libc plugins. If the
-program one wants to port solely uses standard libc functions, porting
-becomes easy. Every porting task usually involves the same steps, which are
-outlined below.
-
-
-Steps in porting applications to Genode
-=======================================
-
-# Check requirements/dependencies (e.g. on Linux)
-
-  The first step is gathering information about the application,
-  e.g. what functionality needs to be provided by the target system and
-  which libraries it uses.
-
-# Create a port file
-
-  Prepare the source code of the application for the use within Genode. The
-  Genode build-system infrastructure uses fetch rules, so called port files,
-  which declare where the source is obtained from, what patches are applied
-  to the source code, and where the source code will be stored and
-  configured.
-
-# Check platform dependent code and create stub code
-
-  This step may require changes to the original source code
-  of the application to be compilable for Genode. At this point, it
-  is not necessary to provide a working implementation for required
-  functions. Just creating stubs of the various functions is fine.
-
-# Create build-description file
-
-  To compile the application we need build rules. Within these rules
-  we also declare all dependencies (e.g. libraries) that are needed
-  by it. The location of these rules depends on the type
-  of the application.
Normal programs use a _target.mk_ file,
-  which is located in the program directory (e.g. _src/app/foobar_)
-  within a given Genode repository. Libraries, in contrast, use
-  one or more _.mk_ files that are placed in the _lib/mk_
-  directory of a Genode repository. In addition, libraries have to
-  provide _import-.mk_ files. Amongst other things, these
-  files are used by applications to find the associated header files
-  of a library. The import files are placed in the _lib/import_
-  directory.
-
-# Create a run script to ease testing
-
-  To ease the testing of applications, it is reasonable to write a run script
-  that creates a test scenario for the application. This run script is used
-  to automatically build all components of the Genode OS framework that are
-  needed to run the application as well as the application itself. Testing
-  the application on any of the kernels supported by Genode becomes just a
-  matter of executing the run script.
-
-# Compile the application
-
-  The ported application is compiled from within the respective build
-  directory like any other application or component of Genode. The build
-  system of Genode uses the build rules created in the fourth step.
-
-# Run the application
-
-  While porting an application, easy testing is crucial. By using the run
-  script that was written in the fifth step, we reduce the effort.
-
-# Debug the application
-
-  In most cases, a ported application does not work right away. We have to
-  debug misbehavior and implement certain functionality in the
-  platform-dependent parts of the application so that it can run on Genode.
-  There are several facilities available on Genode that help in the process.
-  These are different on each Genode platform but basically break down to
-  using either a kernel debugger (e.g., JDB on Fiasco.OC) or 'gdb(1)'. The
-  reader of this guide is advised to take a look at the "User-level debugging
-  on Genode via GDB" documentation.
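-
-The artifacts named in steps 2, 4, and 5 relate to each other as sketched
-below. This is a rough overview using a hypothetical program "foobar" and
-library "libbar"; the paths follow the conventions described above:
-
-! repo/ports/foobar.port            fetch rules: origin, patches, destination
-! repo/ports/foobar.hash            fingerprint of the prepared port
-! repo/src/app/foobar/target.mk     build rules of a program
-! repo/lib/mk/libbar.mk             build rules of a library
-! repo/lib/import/import-libbar.mk  include paths exported to users of libbar
-! repo/run/foobar.run               run script for building and testing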
- -_The order of step 1-4 is not mandatory but is somewhat natural._ - - -Porting a program to natively run on Genode -########################################### - -As an example on how to create a native port of a program for Genode, we will -describe the porting of DosBox more closely. Hereby, each of the steps -outlined in the previous section will be discussed in detail. - - -Check requirements/dependencies -=============================== - -In the first step, we build DosBox for Linux/x86 to obtain needed information. -Nowadays, most applications use a build-tool like Autotools or something -similar that will generate certain files (e.g., _config.h_). These files are -needed to successfully compile the program. Naturally they are required on -Genode as well. Since Genode does not use the original build tool of the -program for native ports, it is appropriate to copy those generated files -and adjust them later on to match Genode's settings. - -We start by checking out the source code of DosBox from its subversion repository: - -! $ svn export http://svn.code.sf.net/p/dosbox/code-0/dosbox/trunk@3837 dosbox-svn-3837 -! $ cd dosbox-svn-3837 - -At this point, it is helpful to disable certain options that are not -available or used on Genode just to keep the noise down: - -! $ ./configure --disable-opengl -! $ make > build.log 2>&1 - -After the DosBox binary is successfully built, we have a log file -(build.log) of the whole build process at our disposal. This log file will -be helpful later on when the _target.mk_ file needs to be created. In -addition, we will inspect the DosBox binary: - -! $ readelf -d -t src/dosbox|grep NEEDED -! 0x0000000000000001 (NEEDED) Shared library: [libasound.so.2] -! 0x0000000000000001 (NEEDED) Shared library: [libdl.so.2] -! 0x0000000000000001 (NEEDED) Shared library: [libpthread.so.0] -! 0x0000000000000001 (NEEDED) Shared library: [libSDL-1.2.so.0] -! 0x0000000000000001 (NEEDED) Shared library: [libpng12.so.0] -! 
0x0000000000000001 (NEEDED) Shared library: [libz.so.1]
-! 0x0000000000000001 (NEEDED) Shared library: [libSDL_net-1.2.so.0]
-! 0x0000000000000001 (NEEDED) Shared library: [libX11.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libstdc++.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libm.so.6]
-! 0x0000000000000001 (NEEDED) Shared library: [libgcc_s.so.1]
-! 0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
-
-Using _readelf_ on the binary shows all direct dependencies. We now know
-that at least libSDL, libSDL_net, libstdc++, libpng, libz, and
-libm are required by DosBox. The remaining libraries are mostly
-mandatory on Linux and do not matter on Genode. Luckily, all of these
-libraries are already available on Genode. For now, all we have to do is
-keep them in mind.
-
-
-Creating the port file
-======================
-
-Since DosBox is an application that depends on several ported
-libraries (e.g., libSDL), the _ports_ repository within the Genode
-source tree is a natural fit. On that account, the port file
-_ports/ports/dosbox.port_ is created.
-
-For DosBox, the _dosbox.port_ file looks as follows:
-
-! LICENSE   := GPLv2
-! VERSION   := svn
-! DOWNLOADS := dosbox.svn
-!
-! URL(dosbox) := http://svn.code.sf.net/p/dosbox/code-0/dosbox/trunk
-! DIR(dosbox) := src/app/dosbox
-! REV(dosbox) := 3837
-
-First, we define the license, the version, and the type of the source-code
-origin. In the case of DosBox, we check out the source code from a Subversion
-repository. This is denoted by the '.svn' suffix of the item specified in
-the 'DOWNLOADS' declaration. Other valid types are 'file' (a plain file),
-'archive' (an archive of the types tar.gz, tar.xz, tgz, tar.bz2, or zip),
-or 'git' (a Git repository).
-To check out the source code from the Subversion repository, we also need
-its URL, the revision we want to check out, and the destination directory
-that will contain the sources afterwards. These declarations are mandatory and
-must always be specified.
Otherwise, the preparation of the port will fail.
-
-! PATCHES := $(addprefix src/app/dosbox/patches/,\
-!              $(notdir $(wildcard $(REP_DIR)/src/app/dosbox/patches/*.patch)))
-!
-! PATCH_OPT := -p2 -d src/app/dosbox
-
-As the next step, we declare all patches that are needed for the DosBox port.
-Since, in this case, the patches use a different path format, we have
-to override the default patch settings by defining the 'PATCH_OPT' variable.
-
-Each port file comes along with a hash file. This hash is generated by taking
-several sources into account: the port file, each patch, and the
-port-preparation tool (_tool/ports/prepare_port_) are the ingredients for
-the hash value. If any of these files is changed, a new hash will be
-generated. For now, we just write "dummy" into the _ports/ports/dosbox.hash_
-file.
-
-The DosBox port can now be prepared by executing
-
-! $ /tool/ports/prepare_port dosbox
-
-However, we get the following error message:
-
-! Error: /ports/dosbox.port is out of date, expected 
-
-We get this message because we had specified the "dummy" hash value in
-the _dosbox.hash_ file. The prepare_port tool computes a fingerprint
-of the actual version of the port and compares this fingerprint with the
-hash value specified in _dosbox.hash_. The computed fingerprint can
-be found at _/contrib/dosbox-dummy/dosbox.hash_. In the final
-step of the port, we will replace the dummy fingerprint with the actual
-fingerprint of the port. But before finalizing the porting work, it is
-practical to keep using the dummy hash and suppress the fingerprint check.
-This can be done by adding 'CHECK_HASH=no' as argument to the prepare_port
-tool:
-
-! $ /tool/ports/prepare_port dosbox CHECK_HASH=no
-
-
-Check platform-dependent code
-=============================
-
-At this point, it is important to spot platform-dependent source files or
-rather certain functions that are not yet available on Genode. These source
-files should be omitted.
Of course they may be used as a guidance when -implementing the functionality for Genode later on, when creating the -_target.mk_ file. In particular the various 'cdrom_ioctl_*.cpp' files are such -candidates in this example. - - -Creating the build Makefile -=========================== - -Now it is time to write the build rules into the _target.mk_, which will be -placed in _ports/src/app/dosbox_. - -Armed with the _build.log_ that we created while building DosBox on Linux, -we assemble a list of needed source files. If an application just -uses a simple Makefile and not a build tool, it might be easier to just -reuse the contents of this Makefile instead. - -First of all, we create a shortcut for the source directory of DosBox by calling -the 'select_from_ports' function: - -! DOSBOX_DIR := $(call select_from_ports,dosbox)/src/app/dosbox - -Under the hood, the 'select_from_ports' function looks up the -fingerprint of the specified port by reading the corresponding -.hash file. It then uses this hash value to construct the -directory path within the _contrib/_ directory that belongs to -the matching version of the port. If there is no hash file that matches the -port name, or if the port directory does not exist, the build system -will back out with an error message. - -Examining the log file leaves us with the following list of source files: - -! SRC_CC_cpu = $(notdir $(wildcard $(DOSBOX_DIR)/src/cpu/*.cpp)) -! SRC_CC_debug = $(notdir $(wildcard $(DOSBOX_DIR)/src/debug/*.cpp)) -! FILTER_OUT_dos = cdrom_aspi_win32.cpp cdrom_ioctl_linux.cpp cdrom_ioctl_os2.cpp \ -! cdrom_ioctl_win32.cpp -! SRC_CC_dos = $(filter-out $(FILTER_OUT_dos), \ -! $(notdir $(wildcard $(DOSBOX_DIR)/src/dos/*.cpp))) -! […] -! SRC_CC = $(notdir $(DOSBOX_DIR)/src/dosbox.cpp) -! SRC_CC += $(SRC_CC_cpu) $(SRC_CC_debug) $(SRC_CC_dos) $(SRC_CC_fpu) \ -! $(SRC_CC_gui) $(SRC_CC_hw) $(SRC_CC_hw_ser) $(SRC_CC_ints) \ -! $(SRC_CC_ints) $(SRC_CC_misc) $(SRC_CC_shell) -! -! 
vpath %.cpp $(DOSBOX_DIR)/src
-! vpath %.cpp $(DOSBOX_DIR)/src/cpu
-! […]
-
-_The only variable here that is actually evaluated by Genode's build system is_
-'SRC_CC'. _The rest of the variables are little helpers that make our_
-_life more comfortable._
-
-In this case, it is mandatory to use GNU Make's 'notdir' file-name function
-because otherwise the compiled object files would be stored within
-the _contrib_ directories. Genode runs on multiple platforms with varying
-architectures, and mixing object files is considered harmful, which can happen
-easily if the application is built from the original source directory. That's
-why you have to use a build directory for each platform. The Genode build
-system will create the needed directory hierarchy within the build directory
-automatically. By combining GNU Make's 'notdir' and 'wildcard' functions, we
-can assemble a list of all needed source files without much effort. We then
-use 'vpath' to point GNU Make to the right source file within the dosbox
-directory.
-
-The remaining thing to do now is setting the right include directories and
-proper compiler flags:
-
-! INC_DIR += $(PRG_DIR)
-! INC_DIR += $(DOSBOX_DIR)/include
-! INC_DIR += $(addprefix $(DOSBOX_DIR)/src/, cpu debug dos fpu gui hardware \
-!                                            hardware/serialport ints misc shell)
-
-'PRG_DIR' _is a special variable of Genode's build system,_
-_and its value is always the absolute path to the directory containing_
-_the 'target.mk' file._
-
-We copy the _config.h_ file, which was generated in the first step, to this
-directory and change certain parts of it to better match Genode's
-environment. Below is a skimmed diff of these changes:
-
-! --- config.h.orig    2013-10-21 15:27:45.185719517 +0200
-! +++ config.h 2013-10-21 15:36:48.525727975 +0200
-! @@ -25,7 +25,8 @@
-!  /* #undef AC_APPLE_UNIVERSAL_BUILD */
-!
-!  /* Compiling on BSD */
-! -/* #undef BSD */
-! +/* Genode's libc is based on FreeBSD 8.2 */
-! +#define BSD 1
-!
-!  […]
-!
-! 
/* The type of cpu this target has */
-! -#define C_TARGETCPU X86_64
-! +/* we define it ourself */
-! +/* #undef C_TARGETCPU */
-!
-!  […]
-
-Thereafter, we specify the compiler flags:
-
-! CC_OPT = -DHAVE_CONFIG_H -D_GNU_SOURCE=1 -D_REENTRANT
-! ifeq ($(filter-out $(SPECS),x86_32),)
-! INC_DIR += $(PRG_DIR)/x86_32
-! CC_OPT  += -DC_TARGETCPU=X86
-! else ifeq ($(filter-out $(SPECS),x86_64),)
-! INC_DIR += $(PRG_DIR)/x86_64
-! CC_OPT  += -DC_TARGETCPU=X86_64
-! endif
-!
-! CC_WARN = -Wall
-! #CC_WARN += -Wno-unused-variable -Wno-unused-function -Wno-switch \
-!             -Wunused-value -Wno-unused-but-set-variable
-
-As noted in the comment seen in the diff, we define 'C_TARGETCPU'
-and adjust the include directories ourselves according to the target
-architecture.
-
-While debugging, compiler warnings for 3rd-party code are really helpful, but
-they tend to be annoying once the porting work is finished. By removing the
-hash mark, we can keep the compiler from complaining too much.
-
-Lastly, we need to add the required libraries, which we identified in step 1:
-
-! LIBS += libc libm libpng sdl stdcxx zlib
-! LIBS += libc_lwip_nic_dhcp config_args
-
-In addition to the required libraries, a few Genode-specific
-libraries are also needed. These libraries implement certain
-functions in the libc via the libc's plugin mechanism.
-libc_lwip_nic_dhcp, for example, is used to connect the BSD socket interface
-to a NIC service such as a network device driver.
-
-
-Creating the run script
-=======================
-
-To ease compiling, running, and debugging DosBox, we create a run script
-at _ports/run/dosbox.run_.
-
-First, we specify the components that need to be built
-
-! set build_components {
-!     core init drivers/audio drivers/framebuffer drivers/input
-!     drivers/pci drivers/timer app/dosbox
-! }
-! build $build_components
-
-and instruct _tool/run_ to create the boot directory that hosts
-all binaries and files which belong to the DosBox scenario.
-
-As the name 'build_components' suggests, you only have to declare
-the components of Genode that are needed in this scenario. All
-dependencies of DosBox (e.g., libSDL) will be built before DosBox
-itself.
-
-Next, we provide the scenario's configuration 'config':
-
-! append config {
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-! }
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-!
-! }
-! install_config $config
-
-The _config_ file will be used by the init program to start all
-components and applications of the scenario, including DosBox.
-
-Thereafter, we declare all boot modules:
-
-! set boot_modules {
-!     core init timer audio_drv fb_drv ps2_drv ld.lib.so
-!     libc.lib.so libm.lib.so
-!     lwip.lib.so libpng.lib.so stdcxx.lib.so sdl.lib.so
-!     pthread.lib.so zlib.lib.so dosbox dosbox.tar
-! }
-! build_boot_image $boot_modules
-
-The boot modules comprise all binaries and other files, like
-the tar archive that contains DosBox' configuration file _dosbox.conf_,
-that are needed for this scenario to run successfully.
-
-Finally, we set certain options, which are used when Genode is executed
-in Qemu, and instruct _tool/run_ to keep the scenario running as long
-as it is not manually stopped:
-
-! append qemu_args " -m 256 -soundhw ac97 "
-! run_genode_until forever
-
-_It is reasonable to write the run script in a way that makes it possible_
-_to use it for multiple Genode platforms. Debugging is often done on_
-_Genode/Linux or on another Genode platform running in Qemu, but testing_
-_is normally done using actual hardware._
-
-
-Compiling the program
-=====================
-
-To compile DosBox and all libraries it depends on, we execute
-
-! $ make app/dosbox
-
-from within Genode's build directory.
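-
-As a rule of thumb, the argument given to 'make' mirrors the location of the
-corresponding build rules relative to the source repositories. A few examples
-taken from this guide (the concrete paths are illustrative):
-
-! $ make app/dosbox             # builds src/app/dosbox/target.mk
-! $ make test/libports/sdl_net  # builds src/test/libports/sdl_net/target.mk
-! $ make run/dosbox             # executes the run script run/dosbox.run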
-
-_We could also use the run script that we created in the previous step, but_
-_that would build all components that are needed to actually run_ DosBox,
-_and at this point our goal is just to get_ DosBox _compiled._
-
-At the first attempt, the compilation stopped because g++ could not find
-the header file _sys/timeb.h_:
-
-! /src/genode/ports/contrib/dosbox-svn-3837/src/ints/bios.cpp:35:23: fatal error:
-! sys/timeb.h: No such file or directory
-
-This header is part of the libc, but until now there was no program that
-actually used it, so nobody noticed that it was missing. This can happen
-all the time when porting a new application to Genode because most
-functionality is enabled or rather added on demand. Someone who is
-porting applications to Genode has to be aware of the fact that it might be
-necessary to extend Genode's functionality by enabling so-far disabled
-bits or by implementing certain functionality needed by the
-application that is ported.
-
-Since 'ftime(3)' is a deprecated function anyway, we change the code of
-DosBox to use 'gettimeofday(2)'.
-
-After this was fixed, we face another problem:
-
-! /src/genode/ports/contrib/dosbox-svn-3837/src/ints/int10_vesa.cpp:48:33: error:
-! unable to find string literal operator ‘operator"" VERSION’
-
-The fix is quite simple: the compile error is due to the fact
-that Genode uses C++11 by now. It often happens that 3rd-party code
-is not well tested with a C++11-enabled compiler. In any case, a patch file
-should be created, which will be applied when preparing the port.
-
-Furthermore, it would be reasonable to report the bug to the DosBox
-developers so it can be fixed upstream. We can then get rid of our
-local patch.
-
-The next showstoppers are missing symbols in Genode's SDL library port.
-As it turns out, we never actually compiled and linked in the cdrom dummy
-code that is provided by SDL.
-
-
-Running the application
-=======================
-
-DosBox was compiled successfully.
Now it is time to execute the binary
-on Genode. Hence, we use the run script we created in step 5:
-
-! $ make run/dosbox
-
-This may take some time because all other components of the Genode OS
-Framework that are needed for this scenario have to be built.
-
-
-Debugging the application
-=========================
-
-DosBox was successfully compiled, but unfortunately it did not run.
-To be honest, that was expected, and here the fun begins.
-
-At this point, there are several options to choose from. By running
-Genode/Fiasco.OC within Qemu, we can use the kernel debugger (JDB)
-to take a deeper look at what went wrong (e.g., backtraces of the
-running processes, memory dumps of the faulted DosBox process, etc.).
-Doing this can be quite taxing, but fortunately Genode runs on multiple
-kernels, and often problems on one kernel can be reproduced on another
-kernel. For this reason, we choose Genode/Linux, where we can use all
-the normal debugging tools like 'gdb(1)', 'valgrind(1)', and so on. Luckily
-for us, DosBox also fails to run on Genode/Linux. The debugging steps
-are naturally dependent on the ported software. In the case of DosBox,
-the remaining stumbling blocks were a few places where DosBox assumed
-Linux as the host platform.
-
-For the sake of completeness, here is a list of all files that were created
-while porting DosBox to Genode:
-
-! ports/ports/dosbox.hash
-! ports/ports/dosbox.port
-! ports/run/dosbox.run
-! ports/src/app/dosbox/config.h
-! ports/src/app/dosbox/patches/bios.patch
-! ports/src/app/dosbox/patches/int10_vesa.patch
-! ports/src/app/dosbox/target.mk
-! ports/src/app/dosbox/x86_32/size_defs.h
-! ports/src/app/dosbox/x86_64/size_defs.h
-
-[image dosbox]
-  DosBox ported to Genode
-
-Finally, after having tested that both the preparation step and the
-build of DosBox work as expected, it is time to
-finalize the fingerprint stored in the _/ports/ports/dosbox.hash_
-file.
This can be done by copying the content of the
-_/contrib/dosbox-dummy/dosbox.hash_ file.
-Alternatively, you may invoke the _tool/ports/update_hash_ tool with the
-port name "dosbox" as argument. The next time you
-invoke the prepare_port tool, do not specify the 'CHECK_HASH=no' argument,
-so that the fingerprint check validates that the _dosbox.hash_ file
-corresponds to your _dosbox.port_ file. From now on, the
-_/contrib/dosbox-dummy_ directory will no longer be used because
-the _dosbox.hash_ file points to the port directory named after the real
-fingerprint.
-
-
-Native Genode port of a library
-###############################
-
-Porting a library to be used natively on Genode is similar to porting
-an application to run natively on Genode. The source code has to be
-obtained and, if needed, patched to run on Genode.
-As an example of how to port a library to natively run on Genode, we
-will describe the porting of SDL_net in more detail. Ported libraries
-are placed in the _libports_ repository of Genode, but this is just a
-convention. Feel free to host your library port in a custom repository
-of yours.
-
-
-Checking requirements/dependencies
-==================================
-
-We will proceed as we did when we ported DosBox to run natively on Genode.
-First, we build SDL_net on Linux to obtain a log file of the whole build
-process:
-
-! $ wget http://www.libsdl.org/projects/SDL_net/release/SDL_net-1.2.8.tar.gz
-! $ tar xvzf SDL_net-1.2.8.tar.gz
-! $ cd SDL_net-1.2.8
-! $ ./configure
-! $ make > build.log 2>&1
-
-
-Creating the port file
-======================
-
-We start by creating _/libports/ports/sdl_net.port_:
-
-! LICENSE   := BSD
-! VERSION   := 1.2.8
-! DOWNLOADS := sdl_net.archive
-!
-! URL(sdl_net) := http://www.libsdl.org/projects/SDL_net/release/SDL_net-$(VERSION).tar.gz
-! SHA(sdl_net) := fd393059fef8d9925dc20662baa3b25e02b8405d
-! DIR(sdl_net) := src/lib/sdl_net
-!
-! 
PATCHES := src/lib/sdl_net/SDLnet.patch src/lib/sdl_net/SDL_net.h.patch
-
-In addition to the URL, the SHA1 checksum of the SDL_net archive needs to be
-specified because _tool/ports/prepare_port_ validates the downloaded archive
-against this hash.
-
-Applications that want to use SDL_net have to include the 'SDL_net.h' header
-file. Hence, it is necessary to make this file visible to applications. This
-is done by populating the _/contrib/sdl-/include_ directory:
-
-! DIRS := include/SDL
-! DIR_CONTENT(include/SDL) := src/lib/sdl_net/SDL_net.h
-
-For now, we also use a dummy hash in the _sdl_net.hash_ file, as it was done
-while porting DosBox. We will replace the dummy hash with the proper one at
-the end.
-
-
-Creating the build Makefile
-===========================
-
-We create the build rules in _libports/lib/mk/sdl_net.mk_:
-
-! SDL_NET_DIR := $(call select_from_ports,sdl_net)/src/lib/sdl_net
-!
-! SRC_C = $(notdir $(wildcard $(SDL_NET_DIR)/SDLnet*.c))
-!
-! vpath %.c $(SDL_NET_DIR)
-!
-! INC_DIR += $(SDL_NET_DIR)
-!
-! LIBS += libc sdl
-
-'SDL_net' should be used as a shared library. To achieve this, we
-have to add the following statement to the 'mk' file:
-
-! SHARED_LIB = yes
-
-_If we omit this statement, Genode's build system will automatically_
-_build SDL_net as a static library called_ 'sdl_net.lib.a' _that_
-_is linked directly into the application._
-
-It is reasonable to create a dummy application that uses the
-library because libraries are only built automatically as a
-dependency of an application.
-
-Therefore, we create
-_libports/src/test/libports/sdl_net/target.mk_ with the following content:
-
-! TARGET = test-sdl_net
-! LIBS   = libc sdl_net
-! SRC_CC = main.cc
-!
-! vpath main.cc $(PRG_DIR)/..
-
-At this point, we also create _lib/import/import-sdl_net.mk_
-with the following content:
-
-! SDL_NET_PORT_DIR := $(call select_from_ports,sdl_net)
-! 
INC_DIR += $(SDL_NET_PORT_DIR)/include $(SDL_NET_PORT_DIR)/include/SDL
-
-Each target that depends on SDL_net by adding it to its 'LIBS' variable
-will automatically include the _import-sdl_net.mk_ file and therefore
-will use the specified include directories to find the _SDL_net.h_ header.
-
-
-Compiling the library
-=====================
-
-We compile the SDL_net library as a side effect of building our dummy test
-program by executing
-
-! $ make test/libports/sdl_net
-
-All source files compile fine, but unfortunately, the linking of the
-library does not succeed:
-
-! /src/genodebuild/foc_x86_32/var/libcache/sdl_net/sdl_net.lib.so:
-! undefined reference to `gethostbyaddr'
-
-The symbol 'gethostbyaddr' is missing, which is often a clear sign
-of a missing dependency. In this case, however, 'gethostbyaddr(3)' is
-missing because this function does not exist in Genode's libc _(*)_.
-But 'getaddrinfo(3)' exists. We are now facing the choice of implementing
-'gethostbyaddr(3)' or changing the code of SDL_net to use 'getaddrinfo(3)'.
-Porting applications or libraries to Genode may always involve this kind of
-choice. Which way is best has to be decided by closely examining the
-matter at hand. Sometimes it is better to implement the missing functions,
-and sometimes it is more beneficial to change the contributed code.
-In this case, we opt for changing SDL_net because the former function is
-obsolete anyway and implementing 'gethostbyaddr(3)' would involve changes to
-several libraries in Genode, namely the libc and the network-related
-libc plugin. However, we have to keep in mind that we are likely to encounter
-another application or library that also uses this function in the future.
-
-With this change in place, SDL_net compiles fine.
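-
-As a sketch of such a change: the obsolete resolver functions
-('gethostbyname(3)', 'gethostbyaddr(3)') can typically be replaced by
-'getaddrinfo(3)' and 'getnameinfo(3)'. The following stand-alone C snippet
-illustrates the forward-lookup half of the pattern. The helper name
-'resolve_ipv4' is ours for illustration, not part of SDL_net:

```c
/* Sketch: replacing obsolete resolver calls with getaddrinfo(3).
 * The helper name resolve_ipv4() is illustrative, not SDL_net's API. */

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* resolve 'host' to an IPv4 address, return 0 on success */
static int resolve_ipv4(const char *host, struct in_addr *out)
{
	struct addrinfo hints, *res = NULL;

	memset(&hints, 0, sizeof(hints));
	hints.ai_family   = AF_INET;      /* IPv4 only, as in SDL_net 1.2 */
	hints.ai_socktype = SOCK_STREAM;

	if (getaddrinfo(host, NULL, &hints, &res) != 0 || !res)
		return -1;

	/* take the first result and release the list */
	*out = ((struct sockaddr_in *)res->ai_addr)->sin_addr;
	freeaddrinfo(res);
	return 0;
}

int main(void)
{
	struct in_addr addr;
	if (resolve_ipv4("127.0.0.1", &addr) == 0)
		printf("resolved to %s\n", inet_ntoa(addr));
	return 0;
}
```

-Unlike the obsolete functions, 'getaddrinfo(3)' is thread-safe and handles
-numeric as well as named hosts, which keeps the patched SDL_net code simple.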
-
-_(*) Actually, this function is implemented in Genode's_ libc _but is_
-_only available when using libc_resolv, which we did not do for the sake of_
-_this example._
-
-
-Testing the library
-===================
-
-The freshly ported library is best tested with the application that was the
-reason for porting the library in the first place, since it is unlikely that
-we port a library just for fun and no profit. Therefore, it is not necessary
-to write a run script for a library alone.
-
-For the record, here is a list of all files that were created by
-porting SDL_net to Genode:
-
-! libports/lib/mk/sdl_net.mk
-! libports/lib/import/import-sdl_net.mk
-! libports/ports/sdl_net.hash
-! libports/ports/sdl_net.port
-! libports/src/lib/sdl_net/SDLnet.patch
-! libports/src/lib/sdl_net/SDL_net.h.patch
-! libports/src/test/libports/sdl_net/target.mk
-
-
-Porting an application to Genode's Noux runtime
-###############################################
-
-Porting an application to Genode's Noux runtime is basically the same as
-porting a program to natively run on Genode. The source code has to be
-prepared and, if needed, patched to run in Noux. In contrast to a native
-port, however, there are Noux build rules (_ports/mk/noux.mk_) that enable
-us to use the original build tool if it is based upon Autotools. Building
-the application is done within a cross-compile environment. In this
-environment, all needed variables like 'CC', 'LD', 'CFLAGS', and so on are
-set to their proper values. In addition to these precautions, using
-_noux.mk_ simplifies certain things. The system-call handling is implemented
-in the libc plugin _libc_noux_ (the source code is found in
-_ports/src/lib/libc_noux_). All applications running in Noux have to be
-linked against this library, which is done implicitly by using the build
-rules of Noux.
-
-As an example of how to port an application to Genode's Noux runtime, we
-will describe the porting of GNU's 'tar' tool in more detail.
A ported
-application is normally referred to as a Noux package.
-
-Checking requirements/dependencies
-==================================
-
-As usual, we first build GNU tar on Linux/x86 and capture the build
-process:
-
-! $ wget http://ftp.gnu.org/gnu/tar/tar-1.27.tar.xz
-! $ tar xJf tar-1.27.tar.xz
-! $ cd tar-1.27
-! $ ./configure
-! $ make > build.log 2>&1
-
-
-Creating the port file
-======================
-
-We start by creating the port file _ports/ports/tar.port_:
-
-! LICENSE   := GPLv3
-! VERSION   := 1.27
-! DOWNLOADS := tar.archive
-!
-! URL(tar) := http://ftp.gnu.org/gnu/tar/tar-$(VERSION).tar.xz
-! SHA(tar) := 790cf784589a9fcc1ced33517e71051e3642642f
-! SIG(tar) := ${URL(tar)}.sig
-! KEY(tar) := GNU
-! DIR(tar) := src/noux-pkg/tar
-
-_As of version 14.05, Genode does not check the signature specified via_
-_the SIG and KEY declarations but relies on the SHA checksum only. However,_
-_as signature checks are planned for the future, we include the_
-_respective declarations if signature files are available._
-
-While porting GNU tar, we will use a dummy hash as well.
-
-
-Creating the build rule
-=======================
-
-Build rules for Noux packages are located in _/ports/src/noux-pkg_.
-
-The _tar/target.mk_ file corresponding to GNU tar looks like this:
-
-! CONFIGURE_ARGS = --bindir=/bin \
-!                  --libexecdir=/libexec
-!
-! include $(REP_DIR)/mk/noux.mk
-
-The variable 'CONFIGURE_ARGS' contains the options that are
-passed on to Autoconf's configure script. The Noux-specific build
-rules in _noux.mk_ always have to be included last.
-
-The build rules for GNU tar are quite short. Therefore, at the end
-of this chapter, we will take a look at a much more extensive example.
-
-
-Creating a run script
-=====================
-
-Creating a run script to test Noux packages is the same as it is
-for natively ported applications. Therefore, we will only focus
-on the Noux-specific parts of the run script and omit the rest.
- -First, we add the desired Noux packages to 'build_components': - -! set noux_pkgs "bash coreutils tar" -! -! foreach pkg $noux_pkgs { -! lappend_if [expr ![file exists bin/$pkg]] build_components noux-pkg/$pkg } -! -! build $build_components - -Since each Noux package is, like every other Genode binary, installed to the -_/bin_ directory, we create a tar archive of each package from -each directory: - -! foreach pkg $noux_pkgs { -! exec tar cfv bin/$pkg.tar -h -C bin/$pkg . } - -_Using noux.mk makes sure that each package is always installed to_ -_/bin/._ - -Later on, we will use these tar archives to assemble the file system -hierarchy within Noux. - -Most applications ported to Noux want to read and write files. On that -matter, it is reasonable to provide a file-system service and the easiest -way to do this is to use the ram_fs server. This server provides a RAM-backed -file system, which is perfect for testing Noux applications. With -the help of the session label we can route multiple directories to the -file system in Noux: - -! append config { -! -! […] -! -! -! -! -! -! -! -! -! -! -! -! -! -! […] - -The file system Noux presents to the running applications is constructed -out of several stacked file systems. These file systems have to be -registered in the 'fstab' node in the configuration node of Noux: - -! -! -! -! } - -Each Noux package is added - -! foreach pkg $noux_pkgs { -! append config { -! " }} - -and the routes to the ram_fs file system are configured: - -! append config { -! -! -! -! -! -! -! -! -! } - -In this example we save the run script as _ports/run/noux_tar.run_. - - -Compiling the Noux package -========================== - -Now we can trigger the compilation of tar by executing - -! 
$ make VERBOSE= noux-pkg/tar - -_At least on the first compilation attempt, it is wise to unset_ 'VERBOSE' -_because it enables us to see the whole output of the_ 'configure' _process._ - -By now, Genode provides almost all libc header files that are used by -typical POSIX programs. In most cases, it is rather a matter of enabling -the right definitions and compilation flags. It might be worth to take a -look at FreeBSD's ports tree because Genode's libc is based upon the one -of FreeBSD 8.2.0 and if certain changes to the contributed code are needed, -they are normally already done in the ports tree. - -The script _noux_env.sh_ that is used to create the cross-compile -environment as well as the famous _config.log_ are found -in _/noux-pkg/_. - - -Running the Noux package -======================== - -We use the previously written run script to start the scenario, in which we -can execute and test the Noux package by issuing: - -! $ make run/noux_tar - -After the system has booted and Noux is running, we first create some test -files from within the running bash process: - -! bash-4.1$ mkdir /tmp/foo -! bash-4.1$ echo 'foobar' > /tmp/foo/bar - -Following this we try to create a ".tar" archive of the directory _/tmp/foo_ - -! bash-4.1$ cd /tmp -! bash-4.1$ tar cvf foo.tar foo/ -! tar: /tmp/foo: Cannot stat: Function not implemented -! tar: Exiting with failure status due to previous errors -! bash-4.1$ - -Well, this does not look too good but at least we have a useful error message -that leads (hopefully) us into the right direction. - - -Debugging an application that uses the Noux runtime -=================================================== - -Since the Noux service is basically the kernel part of our POSIX runtime -environment, we can ask Noux to show us the system calls executed by tar. -We change its configuration in the run script to trace all system calls: - -! […] -! -! -! 
[…]
-
-We start the run script again, create the test files and try to create a
-".tar" archive. It still fails, but now we have a trace of all system calls
-and know at least what is going on in Noux itself:
-
-! […]
-! [init -> noux] PID 0 -> SYSCALL FORK
-! [init -> noux] PID 0 -> SYSCALL WAIT4
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL EXECVE
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL STAT
-! [init -> noux] PID 5 -> SYSCALL OPEN
-! [init -> noux] PID 5 -> SYSCALL FTRUNCATE
-! [init -> noux] PID 5 -> SYSCALL FSTAT
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux -> /bin/tar] DUMMY fstatat(): fstatat called, not implemented
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL GETTIMEOFDAY
-! [init -> noux] PID 5 -> SYSCALL CLOSE
-! [init -> noux] PID 5 -> SYSCALL FCNTL
-! [init -> noux] PID 5 -> SYSCALL WRITE
-! [init -> noux] PID 5 -> SYSCALL CLOSE
-! [init -> noux] child /bin/tar exited with exit value 2
-! […]
-
-_The trace log was shortened to only contain the important information._
-
-We now see at which point something went wrong. To be honest, we see the
-'DUMMY' message even without enabling the tracing of system calls. But
-there are situations where an application is actually stuck in a (blocking)
-system call and it is difficult to see in which one.
-
-Anyhow, 'fstatat' is not properly implemented. At this point, we either
-have to add this function to Genode's libc or add it to libc_noux.
-If we add it to the libc, not only applications running in Noux will
-benefit but all applications using the libc. 
Implementing it in -libc_noux is the preferred way if there are special circumstances because -we have to treat the function differently when used in Noux (e.g. 'fork'). - -For the sake of completeness here is a list of all files that were created by -porting GNU tar to Genode's Noux runtime: - -! ports/ports/tar.hash -! ports/ports/tar.port -! ports/run/noux_tar.run -! ports/src/noux-pkg/tar/target.mk - - -Extensive build rules example -============================= - -The build rules for OpenSSH are much more extensive than the ones in -the previous example. Let us take a quick look at those build rules to -get a better understanding of possible challenges one may encounter while -porting a program to Noux: - -! # This prefix 'magic' is needed because OpenSSH uses $exec_prefix -! # while compiling (e.g. -DSSH_PATH) and in the end the $prefix and -! # $exec_prefix path differ. -! -! CONFIGURE_ARGS += --disable-ip6 \ -! […] -! --exec-prefix= \ -! --bindir=/bin \ -! --sbindir=/bin \ -! --libexecdir=/bin - -In addition to the normal configure options, we have to also define the -path prefixes. The OpenSSH build system embeds certain paths in the -ssh binary, which need to be changed for Noux. - -! INSTALL_TARGET = install - -Normally the Noux build rules (_noux.mk_) execute 'make install-strip' to -explicitly install binaries that are stripped of their debug symbols. The -generated Makefile of OpenSSH does not use this target. It automatically -strips the binaries when executing 'make install'. Therefore, we set the -variable 'INSTALL_TARGET' to override the default behaviour of the -Noux build rules. - -! LIBS += libcrypto libssl zlib libc_resolv - -As OpenSSH depends on several libraries, we need to include these in the -build Makefile. These libraries are runtime dependencies and need to be -present when running OpenSSH in Noux. - -Sometimes it is needed to patch the original build system. One way to do -this is by applying a patch while preparing the source code. 
The other way is to do it before building the Noux package:
-
-! noux_built.tag: Makefile Makefile_patch
-!
-! Makefile_patch: Makefile
-!	@#
-!	@# Our $(LDFLAGS) contain options which are usable by gcc(1)
-!	@# only. So instead of using ld(1) to link the binary, we have
-!	@# to use gcc(1).
-!	@#
-!	$(VERBOSE)sed -i 's|^LD=.*|LD=$(CC)|' Makefile
-!	@#
-!	@# We do not want to generate host keys because we are cross-compiling
-!	@# and we cannot run Genode binaries on the build system.
-!	@#
-!	$(VERBOSE)sed -i 's|^install:.*||' Makefile
-!	$(VERBOSE)sed -i 's|^install-nokeys:|install:|' Makefile
-!	@#
-!	@# The path of ssh(1) is hardcoded to $(bindir)/ssh which in our
-!	@# case is insufficient.
-!	@#
-!	$(VERBOSE)sed -i 's|^SSH_PROGRAM=.*|SSH_PROGRAM=/bin/ssh|' Makefile
-
-The target _noux_built.tag_ is a special target defined by the Noux build
-rules. It will be used by the build rules when building the Noux package.
-We add the 'Makefile_patch' target as a dependency to it, so after
-configure is executed, the generated Makefile will be patched.
-
-Autoconf's configure script checks if all requirements are fulfilled and
-therefore tests if all required libraries are installed on the host system.
-This is done by linking a small test program against the particular
-library. Since these libraries are only build-time dependencies, we fool
-the configure script by providing dummy libraries:
-
-! #
-! # Make the zlib linking test succeed
-! #
-! Makefile: dummy_libs
-!
-! LDFLAGS += -L$(PWD)
-!
-! dummy_libs: libz.a libcrypto.a libssl.a
-!
-! libcrypto.a:
-!	$(VERBOSE)$(AR) -rc $@
-! libssl.a:
-!	$(VERBOSE)$(AR) -rc $@
-! libz.a:
-!	$(VERBOSE)$(AR) -rc $@
-
-
-Porting device drivers
-######################
-
-Even though Genode encourages writing native device drivers, this task
-sometimes becomes infeasible. 
Especially if there is no documentation available for a
-certain device or if there are not enough programming resources at hand to
-implement a fully fledged driver. Examples of ported drivers can be found
-in the 'dde_linux', 'dde_bsd', and 'dde_ipxe' repositories.
-
-In this chapter, we will discuss by example how to port a Linux driver for
-an ARM-based SoC to Genode. The goal is to execute driver code in user land
-directly on Genode while making the driver believe it is running within the
-Linux kernel. Traditionally, there have been two approaches to reach this
-goal in Genode. In the past, Genode provided a Linux environment, called
-'dde_linux26', with the purpose to offer just enough infrastructure to
-easily port drivers. However, after adding more drivers it became clear
-that this repository grew extensively, making it hard to maintain. Also,
-updating the environment to support newer Linux-kernel versions became a
-huge effort, which led to the repository being neglected over time.
-
-Therefore, we chose the path of writing a customized environment for each
-driver, which provides a specially tailored infrastructure. We found that
-the support code usually is not larger than a couple of thousand lines of
-code, while upgrading to newer driver versions, as we did with the USB
-drivers, is feasible.
-
-
-Basic driver structure
-======================
-
-The first step in porting a driver is to identify the driver code that has
-to be ported. Once the code is located, we usually create a new Genode
-repository and write a port file to download and extract the code. It is
-good practice to name the port and the hash file after the new repository,
-e.g., _dde_linux.port_ if the repository directory is called
-_/repos/dde_linux_. Having the source code ready, there are three main
-tasks the environment must implement. The first is the driver back end,
-which is responsible for raw device access using Genode primitives. The
-second is the actual environment that emulates the Linux function calls
-the driver code is using. The third is the front end, which exposes, for
-example, a Genode-session interface (like a NIC or block session) that
-client applications can connect to.
-
-
-Further preparations
-====================
-
-Having the code ready, the next step is to create an _*.mk_ file that
-actually compiles the code. For a driver library, a _lib/mk/*.mk_ file has
-to be created, and for a stand-alone program, a _src/*/target.mk_ file is
-created within the repository. With the _*.mk_ file in place, we can now
-start the actual compilation. Of course, this will cause a whole lot of
-errors and warnings. Most of the messages will deal with implicit
-declarations of functions and unknown data types. What we have to do now
-is to go through each warning and error message and either add the header
-file containing the desired function or data type to the list of files
-that will be extracted to the _contrib_ directory or create our own
-prototype or data definition.
-
-When creating our own prototypes, we put them in a file called _lx_emul.h_.
-To actually get this file included in all driver files, we use the
-following code in the _*.mk_ file:
-
-! CC_C_OPT += -include $(INC_DIR)/lx_emul.h
-
-where 'INC_DIR' points to the include path of _lx_emul.h_.
-
-The hard part is to decide which of the two ways to go for a specific
-function or data type, since adding header files also adds more
-dependencies and often more errors and warnings. As a rule of thumb, try
-adding as few headers as possible.
-
-The compiler will also complain about a lot of missing header files. Since
-we do not want to create all these header files, we use a trick in our
-_*.mk_ file that extracts all header-file includes from the driver code
-and creates symbolic links that correspond to the file names and link to
-_lx_emul.h_. 
You can put the
-following code snippet in your _*.mk_ file, which does the trick:
-
-!#
-!# Determine the header files included by the contrib code. For each
-!# of these header files we create a symlink to _lx_emul.h_.
-!#
-!GEN_INCLUDES := $(shell grep -rh "^\#include .*\/" $(DRIVER_CONTRIB_DIR) |\
-!                sed "s/^\#include [^<\"]*[<\"]\([^>\"]*\)[>\"].*/\1/" | \
-!                sort | uniq)
-!
-!#
-!# Filter out original Linux headers that exist in the contrib directory
-!#
-!NO_GEN_INCLUDES := $(shell cd $(DRIVER_CONTRIB_DIR); find -name "*.h" | sed "s/.\///" | \
-!                   sed "s/.*include\///")
-!GEN_INCLUDES := $(filter-out $(NO_GEN_INCLUDES),$(GEN_INCLUDES))
-!
-!#
-!# Put the Linux headers into the 'GEN_INC' directory. Since some includes
-!# use "../../" paths, use a three-level include hierarchy.
-!#
-!GEN_INC := $(shell pwd)/include/include/include
-!
-!$(shell mkdir -p $(GEN_INC))
-!
-!GEN_INCLUDES := $(addprefix $(GEN_INC)/,$(GEN_INCLUDES))
-!INC_DIR += $(GEN_INC)
-!
-!#
-!# Make sure to create the header symlinks prior to building
-!#
-!$(SRC_C:.c=.o) $(SRC_CC:.cc=.o): $(GEN_INCLUDES)
-!
-!$(GEN_INCLUDES):
-!	$(VERBOSE)mkdir -p $(dir $@)
-!	$(VERBOSE)ln -s $(LX_INC_DIR)/lx_emul.h $@
-
-Make sure 'LX_INC_DIR' is the directory containing the _lx_emul.h_ file.
-Note that 'GEN_INC' is added to your 'INC_DIR' variable.
-
-The 'DRIVER_CONTRIB_DIR' variable is defined by calling the
-'select_from_ports' function at the beginning of a Makefile or an include
-file, which is used by all other Makefiles:
-
-! DRIVER_CONTRIB_DIR := $(call select_from_ports,driver_repo)/src/lib/driver_repo
-
-The process of function definition and type declaration continues until
-the code compiles. This process can be quite tiresome. When the driver
-code finally compiles, the next stage is linking. This will of course lead
-to another whole set of errors that complain about undefined references.
-To actually obtain a linked binary, we create a _dummies.cc_ file. 
To ease things
-up, we suggest creating a macro called 'DUMMY' and implementing functions
-as in the example below:
-
-! /*
-!  * Do not include 'lx_emul.h', since the implementation will most likely
-!  * clash with the prototype
-!  */
-!
-!#define DUMMY(retval, name) \
-! DUMMY name(void) { \
-! PDBG( #name " called (from %p) not implemented", __builtin_return_address(0)); \
-! return retval; \
-!}
-!
-! DUMMY(-1, kmalloc)
-! DUMMY(-1, memcpy)
-! ...
-
-Create a 'DUMMY' for each undefined reference until the binary links. We
-now have a linked binary with a dummy environment.
-
-
-Debugging
-=========
-
-From here on, we will actually start executing code. But before we do
-that, let us have a look at the debugging options for device drivers.
-Since drivers have to be tested on the target platform, there are not as
-many debugging options available as for higher-level applications, like
-running applications on the Linux version of Genode while using GDB for
-debugging. Given these restrictions, debugging is almost completely
-performed over the serial line and on rare occasions with a hardware
-debugger using JTAG.
-
-For basic Linux driver debugging, it is useful to implement the 'printk'
-function (use 'dde_kit_printf' or something similar) first. This way, the
-driver code can output something and additions for debugging can be made.
-The '__builtin_return_address' function is also useful in order to
-determine where a specific function was called from. 'printk' may become a
-problem with devices that require certain time constraints because serial
-line output is very slow. This is why we port most drivers by running them
-on top of the Fiasco.OC version of Genode. There, you can take advantage
-of Fiasco's debugger (JDB) and trace-buffer facility.
-
-The trace buffer can be used to log data and is much faster than 'printk'
-over the serial line. 
Please inspect the 'ktrace.h' file (at
-_base-foc/contrib/l4/pkg/l4sys/include/ARCH-*/ktrace.h_),
-which describes the complete interface. A very handy function there is
-
-!fiasco_tbuf_log_3val("My message", variable1, variable2, variable3);
-
-which stores a message and three variables in the trace buffer. The trace
-buffer can be inspected from within JDB by pressing 'T'.
-
-JDB can be accessed at any time by pressing the 'ESC' key. It can be used
-to inspect the state of all running threads and address spaces on the
-system. There is no recent JDB documentation available, but
-
-:Fiasco kernel debugger manual:
-
-  [http://os.inf.tu-dresden.de/fiasco/doc/jdb.pdf]
-
-should be a good starting point. It is also possible to enter the debugger
-at any time by calling the 'enter_kdebug("My breakpoint")' function from
-within your code. The complete JDB interface can be found in
-_base-foc/contrib/l4/pkg/l4sys/include/ARCH-*/kdebug.h_.
-
-Note that the backtrace ('bt') command does not work out of the box on ARM
-platforms. We have a small patch for that in our Fiasco.OC development
-branch located at GitHub: [http://github.com/ssumpf/foc/tree/dev]
-
-
-The back end
-============
-
-To ease the porting of drivers and the interfacing of Genode from C code,
-Genode offers a library called DDE kit. DDE kit provides access to common
-functions required by drivers, like device memory, virtual memory with
-physical-address lookup, interrupt handling, timers, etc. Please inspect
-_os/include/dde_kit_ to see the complete interface description. You can
-also use 'grep -r dde_kit_ *' to see usage of the interface in Genode.
-
-As an example of using DDE kit, we implement the 'kmalloc' call:
-
-!void *kmalloc(size_t size, gfp_t flags)
-!{
-!	return dde_kit_simple_malloc(size);
-!}
-
-It is also possible to directly use Genode primitives from C++ files; the
-functions only have to be declared as 'extern "C"' so they can be called
-from C code. 
-
-
-The environment
-===============
-
-Having a dummy environment, we may now begin to actually execute driver
-code.
-
-Driver initialization
-~~~~~~~~~~~~~~~~~~~~~
-
-Most Linux drivers have an initialization routine to register themselves
-within the Linux kernel and do other initializations if necessary. In
-order to be initialized, the driver will register a function using the
-'module_init' call. This registered function must be called before the
-driver is actually used. To be able to call the registered function from
-Genode, we define the 'module_init' macro in _lx_emul.h_ as follows:
-
-! #define module_init(fn) void module_##fn(void) { fn(); }
-
-When a driver now registers a function like
-
-! module_init(ehci_hcd_init);
-
-we would have to call
-
-! module_ehci_hcd_init();
-
-during driver startup. Having implemented the above, it is now time to
-start our ported driver on the target platform and check if the
-initialization function is successful. Any important dummy functions that
-are called must be implemented now. A dummy function that does not do
-device-related things, like Linux bookkeeping, need not be implemented.
-Sometimes Linux checks the return values of functions we might not want
-to implement; in this case, it is sufficient to simply adjust the return
-value of the affected function.
-
-Device probing
-~~~~~~~~~~~~~~
-
-Having the driver initialized, we will give the driver access to the
-device resources. This is performed in two steps. In the case of ARM SoCs,
-we have to check in which state the boot loader (usually U-Boot) left the
-device. Sometimes devices are already set up by the boot loader and only a
-simple device reset is necessary to proceed. If the boot loader did not
-touch the device, we most likely have to check and set up all the
-necessary clocks on the platform and may have to perform other low-level
-initializations like PHY setup.
-
-If the device is successfully initialized at the low level, we can hand
-it over to the driver by calling the 'probe' function of the driver. For
-ARM platforms, the 'probe' function takes a 'struct platform_device' as an
-argument, and all important fields, like device resources and interrupt
-numbers, should be set to the correct values before calling 'probe'.
-During 'probe', the driver will most likely map and access device memory,
-request interrupts, and reset the device. All dummy functions that are
-related to these tasks should be implemented or ported at this point.
-
-When 'probe' returns successfully, you may either test other driver
-functions by hand or start building the front end.
-
-
-The front end
-=============
-
-An important design question is how the front end is attached to the
-driver. In some cases, the front end may not use the driver directly but
-other Linux subsystems that are ported or emulated by the environment. For
-example, the USB storage driver implements parts of the SCSI subsystem,
-which in turn is used by the front end. The whole decision depends on the
-kind of driver that is ported and on how much additional infrastructure is
-needed to actually make use of the data. Again, a USB example: For USB
-HID, we needed to port the USB controller driver, the hub driver, the USB
-HID driver, and the generic HID driver in order to retrieve keyboard and
-mouse events from the HID driver.
-
-The last step in porting a device driver is to make it accessible to other
-Genode applications. Typically, this is achieved by implementing one of
-Genode's session interfaces, like a NIC session for network adapters or a
-block session for block devices. You may also define your own session
-interfaces. The implementation of the session interface will most likely
-trigger driver calls, so you have to keep an eye on the dummy functions. 
Also make sure that calls to the
-driver actually do what they are supposed to. For example, a wrong return
-value of a dummy function may cause a function to return without
-performing any work.
-
-
-Notes on synchronization
-========================
-
-After some experiences with Linux drivers and multi-threading, we lately
-chose to have all Linux driver code executed by a single thread only. This
-way, no Linux synchronization primitives have to be implemented, and we
-simply do not have to worry about subtle pre- and postconditions of many
-functions (like "this function has to be called with lock 'x' being
-held").
-
-Unfortunately, we cannot get rid of all threads within a device-driver
-server. There is at least one thread waiting for interrupts and one for
-the entry point that waits for client session requests. In order to
-synchronize these threads, we use Genode's signalling framework. So when,
-for example, the IRQ thread receives an interrupt, it will send a signal.
-The Linux driver thread will at certain points wait for these signals
-(e.g., in functions like 'schedule_timeout' or 'wait_for_completion') and
-execute the right code depending on the kind of signal delivered or, more
-precisely, the signal context. For this to work, we use a class called
-'Signal_dispatcher' (_base/include/base/signal.h_), which inherits from
-'Signal_context'. More than one dispatcher can be bound to a signal
-receiver, while each dispatcher might do different work, like calling the
-Linux interrupt handler in the IRQ example.
-
-
diff --git a/repos/os/doc/init.txt b/repos/os/doc/init.txt
deleted file mode 100644
index 532a46071c..0000000000
--- a/repos/os/doc/init.txt
+++ /dev/null
@@ -1,314 +0,0 @@
-
-             ========================================
-             Configuring the init component of Genode
-             ========================================
-
-             Norman Feske
-
-
-The Genode architecture facilitates the flexible construction of complex
-usage scenarios out of Genode's components used as generic building blocks.
-Thanks to the strictly hierarchic and, at the same time, recursive
-structure of Genode, a parent has full control over the way its children
-interact with each other and with the parent. The init component plays a
-special role in that picture. At boot time, it gets started by core, gets
-assigned all physical resources, and controls the execution of all further
-components, which can be further instances of init. Init's policy is
-driven by a configuration file, which declares a number of children, their
-relationships, and resource assignments. This document describes the
-configuration mechanism used to steer the policy of the init component.
-The configuration is described in a single XML file called 'config'
-supplied via core's ROM service.
-
-
-Configuration
-#############
-
-At the parent-child interface, there are two operations that are subject
-to policy decisions of the parent: the child announcing a service and the
-child requesting a service. If a child announces a service, it is up to
-the parent to decide if and how to make this service accessible to its
-other children. When a child requests a service, the parent may deny the
-session request, delegate the request to its own parent, implement the
-requested service locally, or open a session at one of its other children.
-This decision may depend on the requested service or the
-session-construction arguments provided by the child. 
Apart from assigning resources to children, the central
-element of the policy implemented in the parent is a set of rules to
-route session requests. Therefore, init's configuration concept is laid
-out around components and the routing of session requests. The concept is
-best illustrated by an example (the following config file can be used on
-Linux):
-
-! <config>
-!   <parent-provides>
-!     <service name="LOG"/>
-!   </parent-provides>
-!   <start name="timer">
-!     <resource name="RAM" quantum="1M"/>
-!     <provides> <service name="Timer"/> </provides>
-!   </start>
-!   <start name="test-timer">
-!     <resource name="RAM" quantum="1M"/>
-!     <route>
-!       <service name="Timer"> <child name="timer"/> </service>
-!       <service name="LOG">   <parent/>             </service>
-!     </route>
-!   </start>
-! </config>
-
-First, there is the declaration of services provided by the parent of the
-configured init instance. In this case, we declare that the parent
-provides a LOG service. For each child to start, there is a '<start>'
-node describing resource assignments, declaring services provided by the
-child, and holding a routing table for session requests originating from
-the child. The first child is called "timer" and implements the "Timer"
-service. The second component, called "test-timer", is a client of the
-timer service. In its routing table, we see that requests for "Timer"
-sessions should be routed to the "timer" child, whereas requests for "LOG"
-sessions should be delegated to init's parent. Per-child service routing
-rules provide a flexible way to express arbitrary client-server
-relationships. For example, service requests may be transparently mediated
-through special policy components acting upon session-construction
-arguments. There might be multiple children implementing the same service,
-each addressed by different routing tables. If there is no valid route to
-a requested service, the service is denied. In the example above, the
-routing tables act effectively as a whitelist of services the child is
-allowed to use.
-
-In practice, usage scenarios become more complex than the basic example,
-increasing the size of routing tables. Furthermore, in many practical
-cases, multiple children may use the same set of services, and require
-duplicated routing tables within the configuration. 
In particular during development, the
-elaborate specification of routing tables tends to become an
-inconvenience. To alleviate this problem, there are two mechanisms,
-wildcards and a default route. Instead of specifying a list of single
-service routes targeting the same destination, the wildcard
-'<any-service>' becomes handy. For example, instead of specifying
-! <route>
-!   <service name="ROM"> <parent/> </service>
-!   <service name="RM">  <parent/> </service>
-!   <service name="PD">  <parent/> </service>
-!   <service name="CPU"> <parent/> </service>
-! </route>
-the following shortcut can be used:
-! <route>
-!   <any-service> <parent/> </any-service>
-! </route>
-The latter version is not as strict as the first one because it permits
-the child to create sessions at the parent, which were not whitelisted in
-the elaborate version. Therefore, the use of wildcards is discouraged for
-configuring untrusted components. Wildcards and explicit routes may be
-combined as illustrated by the following example:
-! <route>
-!   <service name="LOG"> <child name="nitlog"/> </service>
-!   <any-service>        <parent/>              </any-service>
-! </route>
-The routing table is processed starting with the first entry. If the route
-matches the service request, it is taken, otherwise the remaining
-routing-table entries are visited. This way, the explicit service route of
-"LOG" sessions to "nitlog" shadows the LOG service provided by the parent.
-
-To emulate the traditional init policy, which allowed a child to use
-services provided by arbitrary other children, there is a further wildcard
-called '<any-child>'. Using this wildcard, such a policy can be expressed
-as follows:
-! <route>
-!   <any-service> <parent/>    </any-service>
-!   <any-service> <any-child/> </any-service>
-! </route>
-This rule would delegate all session requests referring to one of the
-parent's services to the parent. If no parent service matches the session
-request, the request is routed to any child providing the service. The
-rule can be further reduced to:
-! <route>
-!   <any-service> <parent/> <any-child/> </any-service>
-! </route>
-Potential ambiguities caused by multiple children providing the same
-service are detected automatically. In this case, the ambiguity must be
-resolved using an explicit route preceding the wildcards.
-
-To reduce the need to specify the same routing table for many children
-in one configuration, there is a '<default-route>' mechanism. The default
-route is declared within the '<config>' node and used for each '<start>'
-entry with no '<route>' node. 
In particular during development, the default
-route becomes handy to keep the configuration tidy and neat.
-
-The combination of explicit routes and wildcards is designed to scale
-well from being convenient to use during development towards being highly
-secure at deployment time. If only explicit rules are present in the
-configuration, the permitted relationships between all components are
-explicitly defined and can be easily verified. Note, however, that the
-degree to which those rules are enforced at the kernel-interface level
-depends on the used base platform.
-
-
-Advanced features
-#################
-
-In addition to the service-routing facility described in the previous
-section, the following features are worth noting:
-
-
-Resource quota saturation
-=========================
-
-If a specified resource (i.e., RAM quota) exceeds the available resources,
-the available resources are assigned completely to the child. This makes
-it possible to assign all remaining resources to the last child by simply
-specifying an overly large quantum.
-
-
-Multiple instantiation of a single ELF binary
-=============================================
-
-Each '<start>' node requires a unique 'name' attribute. By default, the
-value of this attribute is used as the file name for obtaining the ELF
-binary at the parent's ROM service. If multiple instances of the same
-ELF binary are needed, the binary name can be explicitly specified
-using a '<binary>' sub node of the '<start>' node:
-! <binary name="filename"/>
-This way, the unique child names can be chosen independently from the
-binary file name.
-
-
-Nested configuration
-====================
-
-Each '<start>' node can host a '<config>' sub node. The content of this
-sub node is provided to the child when a ROM session for the file name
-"config" is requested. Thereby, arbitrary configuration parameters can be
-passed to the child. For example, the following configuration starts
-'timer-test' within an init instance within another init instance. 
To show the flexibility of init's
service-routing facility, the "Timer" session of the second-level 'timer-test'
child is routed to the timer service started at the first-level init instance.
! <config>
!   <parent-provides>
!     <service name="ROM"/>
!     <service name="CPU"/>
!     <service name="RM"/>
!     <service name="PD"/>
!     <service name="LOG"/>
!   </parent-provides>
!   <start name="timer">
!     <resource name="RAM" quantum="1M"/>
!     <provides> <service name="Timer"/> </provides>
!     <route> <any-service> <parent/> </any-service> </route>
!   </start>
!   <start name="init">
!     <resource name="RAM" quantum="10M"/>
!     <config>
!       <parent-provides>
!         <service name="ROM"/>
!         <service name="CPU"/>
!         <service name="RM"/>
!         <service name="PD"/>
!         <service name="Timer"/>
!       </parent-provides>
!       <start name="timer-test">
!         <resource name="RAM" quantum="1M"/>
!         <route> <any-service> <parent/> </any-service> </route>
!       </start>
!     </config>
!     <route>
!       <service name="Timer"> <child name="timer"/> </service>
!       <any-service>          <parent/>             </any-service>
!     </route>
!   </start>
! </config>
The services ROM, CPU, RM, and PD are required by the second-level
init instance to create the timer-test component.

As illustrated by this example, the use of the nested configuration feature
enables the construction of arbitrarily complex component trees via a single
configuration file.


Assigning subsystems to CPUs
============================

The assignment of subsystems to CPUs consists of two parts, the definition of
the affinity-space dimensions as used for the init component, and the
association of subsystems with affinity locations (relative to the affinity
space). The affinity space is configured as a sub node of the config node. For
example, the following declaration describes an affinity space of 4x2:

! <config>
!   ...
!   <affinity-space width="4" height="2" />
!   ...
! </config>

Subsystems can be constrained to parts of the affinity space using the
'<affinity>' sub node of a '<start>' entry:

! <config>
!   ...
!   <start name="...">
!     <affinity xpos="0" ypos="1" width="2" height="1" />
!     ...
!   </start>
!   ...
! </config>


Priority support
================

The number of CPU priorities to be distinguished by init can be specified with
the 'prio_levels' attribute of the '<config>' node. The value must be a power
of two. By default, no priorities are used. To assign a priority to a child
component, a priority value can be specified as the 'priority' attribute of
the corresponding '<start>' node. Valid priority values lie in the range of
-prio_levels + 1 (maximum priority degradation) to 0 (no priority degradation).


Verbosity
=========

To ease debugging, init can be directed to print various status information
as LOG output. To enable the verbose mode, assign the value "yes" to the
'verbose' attribute of the '<config>' node.
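Both the 'prio_levels' and 'verbose' attributes are declared at the '<config>'
node. As a hypothetical example (the child name is made up for illustration),
the following snippet enables verbose output, distinguishes four priority
levels, and degrades the priority of one child by one level:

! <config verbose="yes" prio_levels="4">
!   ...
!   <start name="compute_job" priority="-1">
!     ...
!   </start>
! </config>

With 'prio_levels' set to 4, valid 'priority' values range from -3 to 0.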


Propagation of exit events
==========================

A component can notify its parent about its graceful exit via the exit RPC
function of the parent interface. By default, init responds to such a
notification from one of its children by merely printing a log message but
ignores it otherwise. However, there are scenarios where the exit of a
particular child should result in the exit of the entire init component. To
propagate the exit of a child to the parent of init, start nodes can host the
optional sub node '<exit>' with the attribute 'propagate' set to "yes".

! <config>
!   <start name="shell">
!     <exit propagate="yes"/>
!     ...
!   </start>
! </config>

The exit value specified by the exiting child is forwarded to init's parent.


Using the configuration concept
###############################

To get acquainted with the configuration format, there are two example
configuration files located at 'os/src/init/', which are both ready to use
with the Linux version of Genode. Both configurations produce the same
scenario but they differ in the way policy is expressed. The
'explicit_routing' configuration is an example of the elaborative
specification of all service routes. All service requests not explicitly
specified are denied. So this policy is a whitelist enforcing mandatory access
control on each session request. The example illustrates well that such an
elaborative specification is possible in an intuitive manner. However, it is
fairly verbose. In cases where the elaborative specification of service
routing is not fundamentally important, in particular during development, the
use of wildcards can help to simplify the configuration. The 'wildcard'
example demonstrates the use of a default route for session-request resolution
and wildcards. This variant is less strict about which child uses which
service. For development, its simplicity is beneficial but for deployment, we
recommend removing wildcards ('<default-route>', '<any-service>', and
'<any-child>') altogether.
The
absence of such wildcards is easy to check automatically to ensure that
service routes are explicitly whitelisted.

Further configuration examples can be found in the 'os/config/' directory.
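Such a check can be scripted with a plain text search. The following sketch
uses 'grep' on a hypothetical configuration file (the file name and content
are made up for illustration):

```shell
# Create a hypothetical config file that still contains wildcards
# (file name and content are illustrative only)
cat > /tmp/example_config <<'EOF'
<config>
  <default-route>
    <any-service> <parent/> <any-child/> </any-service>
  </default-route>
</config>
EOF

# Report whether any routing wildcard appears in the file
if grep -qE '<(any-service|any-child|default-route)' /tmp/example_config; then
  echo "wildcards present"
else
  echo "no wildcards found"
fi
```

For the file above, the check reports "wildcards present"; for a deployment
configuration, the expected result is "no wildcards found".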