Instead of simply encrypting the private key with AES-256 when storing it to
the 'encrypted_private_key' file, wrap it using the AES-key-wrap algorithm
described in RFC 3394 "Advanced Encryption Standard (AES) Key Wrap Algorithm".
This is more secure and enables us to directly check whether the passphrase
entered by the user was correct or not.
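For illustration, a minimal sketch of why the wrap algorithm enables this check,
assuming the libcrypto primitives (function and buffer names are illustrative,
not the actual CBE code): a wrong passphrase makes the unwrap fail its built-in
integrity check instead of silently yielding a garbage key.
! #include <openssl/aes.h>
!
! /* returns true and fills 'key_out' only if 'passphrase_key' was correct */
! static bool unwrap_private_key(unsigned char const *passphrase_key, /* 32 bytes */
!                                unsigned char const *wrapped,        /* 40 bytes */
!                                unsigned char       *key_out)        /* 32 bytes */
! {
!     AES_KEY dec_key;
!     if (AES_set_decrypt_key(passphrase_key, 256, &dec_key) != 0)
!         return false;
!
!     /* RFC 3394 verifies the integrity value during unwrapping */
!     return AES_unwrap_key(&dec_key, nullptr, key_out, wrapped, 40) > 0;
! }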
Ref #4032
As the file formerly named 'secured_superblock' actually contains the hash of
the superblock that was secured, it was renamed 'superblock_hash'.
Ref #4032
As the file formerly named 'keyfile' actually contains the encrypted private
key of the Trust Anchor, it was renamed 'encrypted_private_key'.
Ref #4032
Until now, the symmetric keys were merely XOR'ed with the private key as a
placeholder for real encryption. Now they are encrypted using AES256 with the
TA's private key as the key.
Ref #4032.
A private key of 256 bits is generated pseudo-randomly using the jitterentropy
VFS plugin on initialization. The private key is stored in the key file
encrypted via AES256 using the SHA256 hash of the user's passphrase. When
unlocking the CBE device, the encrypted private key is read from the key file
and decrypted with the hash of the user's passphrase.
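For illustration, the key-derivation step corresponds roughly to the following
libcrypto call (a sketch, not the actual plugin code):
! #include <cstddef>
! #include <openssl/sha.h>
!
! /* derive the 256-bit AES key from the user's passphrase */
! static void passphrase_to_aes_key(char const *passphrase, std::size_t len,
!                                   unsigned char key[SHA256_DIGEST_LENGTH])
! {
!     SHA256(reinterpret_cast<unsigned char const *>(passphrase), len, key);
! }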
Ref #4032
Instead of using the user passphrase directly, use its SHA256 hash calculated
using libcrypto. The passphrase hash is still stored in the key file to be
used as the basis for the very primitive way of generating the private key.
Ref #4032
Closing the keyfile handle after a write operation wasn't synchronised to the
actual end of the write operation.
Issuing a write operation at the back end returns successfully as soon as the
back end has acknowledged that it will execute the operation. However, the
actual writing of the data might still be in progress at this point. But the
plugin used to close the file handle and declare the operation finished at this
point, which led to warnings about acks on unknown file handles and leaking
resources. Now, the plugin issues a sync operation directly after the write
operation and waits for the sync to complete. This ensures that the plugin
doesn't declare the operation finished too early.
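Expressed against the VFS handle API, the fixed sequence is roughly the
following (a simplified, blocking sketch rather than the plugin's actual state
machine):
! #include <base/entrypoint.h>
! #include <vfs/file_io_service.h>
! #include <vfs/vfs_handle.h>
!
! /* block until the written data has actually reached the back end */
! static void sync_before_close(Genode::Entrypoint &ep, Vfs::Vfs_handle &handle)
! {
!     while (!handle.fs().queue_sync(&handle))
!         ep.wait_and_dispatch_one_io_signal();
!
!     while (handle.fs().complete_sync(&handle) == Vfs::File_io_service::SYNC_QUEUED)
!         ep.wait_and_dispatch_one_io_signal();
!
!     /* only now close the handle and acknowledge the operation */
! }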
Ref #4032
The unlocking operation in the trust anchor was broken, which caused bad keys in
the CBE. This rewrites the whole operation to work as desired. Note that this
doesn't make it safer! The private key is still almost the same as the
passphrase and stored in plaintext.
Ref #4032
The plugin used to close file handles via 'vfs_env.root_dir.close'. However,
this led to resource leaks and apparently isn't the right way to do it. Other
VFS plugins do it by calling 'close' directly on the handle, and doing so in
the trust-anchor plugin as well fixes the leaks.
Ref #4032
Closing the hashfile handle after a write operation wasn't synchronised to the
actual end of the write operation.
Issuing a write operation at the back end returns successfully as soon as the
back end has acknowledged that it will execute the operation. However, the
actual writing of the data might still be in progress at this point. But the
plugin used to close the file handle and declare the operation finished at this
point, which led to warnings about acks on unknown file handles and leaking
resources. Now, the plugin issues a sync operation directly after the write
operation and waits for the sync to complete. This ensures that the plugin
doesn't declare the operation finished too early.
Ref #4032
There were no means for issuing a Deinitialize request at the CBE using the
CBE VFS plugin. The new control/deinitialize file fixes this. When writing
"true" to the file, a Deinitialize request is submitted at the CBE. When
reading the file, the state of the operation is returned as a string of the
format "[current_state] last-result: [last_result]" where [current_state] can
be "idle" or "in-progress" and [last_result] can be "none", "success", or
"failed".
Ref #4032
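For illustration of the control/deinitialize file described above, a component
with the CBE VFS mounted could drive it through the libc as follows (the mount
point '/cbe' is made up for the example):
! #include <fcntl.h>
! #include <stdio.h>
! #include <unistd.h>
!
! int main()
! {
!     /* submit the Deinitialize request */
!     int fd = open("/cbe/control/deinitialize", O_RDWR);
!     if (fd < 0) return 1;
!     write(fd, "true", 4);
!
!     /* poll the state, e.g., "idle last-result: success" */
!     char state[64] { };
!     lseek(fd, 0, SEEK_SET);
!     ssize_t n = read(fd, state, sizeof(state) - 1);
!     if (n > 0) printf("%.*s\n", (int)n, state);
!     close(fd);
!     return 0;
! }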
When discarding a snapshot, the CBE VFS plugin didn't communicate the ID of
the snapshot to the CBE. Instead it set the ID argument to 0. Therefore the
operation never had any effect.
Ref #4032
The snapshots file system used to return the number of snapshots on
'num_dirent' when called for the root directory although it was expected to
return 1. This confused the tooling on top of the VFS.
Ref #4032
Despite being readable, the files control/extend and control/rekey claimed
otherwise when queried. This caused the fs_query tool to not report the
content of the files although it could have.
Ref #4032
Stat calls on the control/extend and control/rekey files returned a bogus file
size that led to an error in the VFS File_content tool. The tool complained
that the size of the file determined while reading the content differs from the
one reported by the stat operation. Now, the stat call will always determine
the actual size of what would be read. However, it isn't guaranteed that this
size doesn't change in the time after the stat operation and before the read
operation.
Ref #4032
The service is loaded dynamically from VBoxSharedClipboard.so at runtime. The
VFS configuration mounts the shared object at /VBoxSharedClipboard.so as
the file is checked by contrib code before loading. An init
configuration in pkg/vbox6/runtime illustrates this and how to re-label
the VBoxSharedClipboard.so ROM to its real name
virtualbox6-sharedclipboard.lib.so.
During Windows 10 boot with sequential block requests, the AHCI request
worker finished earlier than the EMT thread signaled hEvtProcess and began
waiting for hEvtProcessAck indefinitely. The timeout helps to survive this
short phase.
A better solution would use condition variables, which are not
provided by VirtualBox's runtime.
This patch introduces a C API to be used by input drivers to generate
Genode events. The initial version is limited to multitouch events only.
Fixes #4273
* Use the architecture-dependent minimal alignment for all allocations,
e.g. on ARM it is necessary to have cacheline aligned allocations for DMA
* Remove the allocation functions without alignment from generic API
* Fix a warning
Fix #4268
After a DMA transaction, only invalidate the cache lines of the
corresponding DMA buffers if data got transferred from device to
CPU, and not vice versa. Otherwise it might result in data corruption.
Ref #4268
The former implementation did not internally track ROM changes notified
vs. delivered to the client. We adapt the versioning approach implemented
in dynamic_rom_session.h and enable explicit notification of
the current version.
The feature is used by the clipboard to notify permitted readers of the
clipboard ROM service on focus change via the newly created private
Rom::Module::_notify_permitted_readers() function.
Fixes #4274
The includes for the address-space-ID allocator and the translation table are
usually specific to the CPU in use. Therefore these includes can be moved from
their current location in the board header to the CPU headers. This reduces the
number of decisions a board maintainer has to make if the CPU model he's aiming
for is already available.
This can probably also be applied for other includes in the board headers but I
intentionally leave it for a future commit as I don't have the time to do it
all now.
Ref #4217
For base-hw Core, we used to add quite some hardware-specific include paths
to 'INC_DIR'. Generic code used to include, for instance, '<cpu.h>' and
'<translation_table.h>' using these implicit path resolutions. This commit
removes hardware-specific include paths except for
1) the '<board.h>' include paths (e.g., 'src/core/board/pbxa9'),
2) most architecture-specific include paths (e.g., 'src/core/spec/arm_v7'),
3) include paths that reflect usage of virtualization or ARM Trustzone
(e.g., 'src/core/spec/arm/virtualization').
The first category is kept because, in contrast to the former "spec"-mechanism,
the board variable used for this type of resolution is not deprecated and the
board headers are meant to be the front end of hardware-specific headers
towards generic code which is why they must be available generically via
'<board.h>'.
The second category is kept because it was suggested by other maintainers that
simple arch-dependent headers (like for the declaration of a CPU state) should
not imply the inclusion of the whole '<board.h>' and because the architecture
is given also without the former "spec"-mechanism through the type of the build
directory. I think this is questionable but am fine with it.
The third category is kept because the whole way of saying whether
virtualization resp. ARM Trustzone is used is done in an out-dated manner and
changing it now would blow up this commit a lot and exceed the time that I'm
willing to spend. This category should be subject to a future issue.
Ref #4217
The 'src/core/board/<board>/board.h' header is thought as front end of
hardware-specific headers of a given board towards the generic base-hw Core
code. Therefore it leads to problems (circular includes) if the board.h header
is included from within another hardware-specific header.
If hardware-specific headers access declarations from namespace Board in a
definition, the definition should be moved to a compilation unit that may
include board.h. If hardware-specific headers access declarations from board.h
in a declaration, they should either use the primary declaration from the
original header or, if the declaration must be selected according to the board,
another board-specific header should be introduced to reflect this abstraction.
This is applied by this commit for the current state of base-hw.
Ref #4217
It is not necessary to have a class, an object, and a generic header for the
performance counter. The kernel merely enables the counter using CPU registers
('msr' instructions, no MMIO) on arm_v6 and arm_v7 only. Therefore this commit
makes the header arm-specific and replaces class and global static object with
a function for enabling the counter.
Fixes #4217
Let the kernel's driver for the global IRQ controller be a member of the one
Kernel::Main object instead of having it as static variables in the drivers for
the local IRQ controllers. Note that this commit spares out renaming 'Pic' to
'Local_interrupt_controller' which would be more sensible now with the new
'Global_interrupt_controller' class. Furthermore, on ARM boards the commit
doesn't move 'Distributer' stuff to the new global IRQ controller class as they
don't have real data members (only MMIO) and can be instantiated for each CPU
anew. However, the right way would be to instantiate them only once in Main as
well.
Ref #4217
The unmanaged-singleton approach was used in this context only because of the
alignment requirement of the Core main-UTCB. This, however, can also be achieved
with the new 'Aligned' utility, allowing the UTCB to be a member of the Core
main-thread object.
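The idea can be sketched as follows (hypothetical names, not necessarily the
exact interface of the new utility):
! template <typename T, unsigned long ALIGN>
! struct Aligned
! {
!     alignas(ALIGN) T object;
! };
!
! struct Main_thread
! {
!     /* the UTCB-like member carries its 4K alignment requirement with it,
!        so it can simply be a regular data member of the enclosing object */
!     Aligned<char[2048], 4096> _utcb { };
! };
!
! static_assert(alignof(Main_thread) >= 4096, "alignment propagates");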
Ref #4217
It's sufficient to access the boot info only at kernel-initialization time.
Therefore, it can remain completely hidden from the rest of the kernel inside
kernel/main.cc in the initialization function.
Ref #4217
This commit introduces the Kernel::Main class that replaces the former way of
initializing the kernel (former 'kernel_init' function) and calling the C++
kernel entry handler (former 'kernel' function). These two are now
'Main::initialize_and_handle_kernel_entry' and 'Main::handle_kernel_entry'.
Reading the execution time of the idle threads was also already moved to
'Main'. The one static Main instance is meant to successively replace all the
global static objects of the base-hw kernel with data members of the Main
instance making the data model of the kernel much more comprehensible. The
instance and most of its interface are hidden in kernel/main.cc. There are only
rare cases where parts of the Main interface must be accessible from the
outside. This should be done in the most specific way possible (see main.h)
and, if possible, without handing out references to Main data members or the
Main instance itself.
Ref #4217
Normally, the board header can be found for each supported board under
'src/core/board/<BOARD>/board.h'. This was not the case for the board 'pc'
that was located under 'src/core/spec/x86_64/board.h'. The commit fixes this.
Ref #4217
The class name Core_thread in Kernel for the object of the first thread of
core is too generic as there can be an arbitrary number of threads in core
besides this one. Furthermore, creating a core thread has its own syscall
'new_core_thread' that isn't related in any way to Core_thread. Therefore
this commit introduces the more specific name Core_main_thread as replacement
for Core_thread.
Ref #4217
The function was only still used for reading the execution time of idle threads
of CPUs. Certainly, it is technically fine and more performant to read these
values directly from the kernel objects without doing a syscall. However,
calling cpu_pool() for it provides read and write access to a lot more than
only the execution time values. The interface via which Core directly reads
state of the kernel should be as narrow and specific as possible.
In the long run, we want to get rid of the cpu_pool() accessor anyway. Therefore
this commit introduces Kernel::read_idle_thread_execution_time(cpu_idx) as
replacement. The function is implemented in kernel code and called by Core in
platform.cc.
Ref #4217
Apparently, there is no need for exposing the data members of Trace_source, so
we should better make them private before someone gets the impression that they
are meant to be accessed directly.
Ref #4217
Core used to read the kernel-reserved IRQs from the timer objects in the
kernel's CPU objects and the PIC class (inter-processor IRQ). Besides not
being "good style" to access a kernel object in Core, this becomes a problem
when trying to prevent CPU pool from being accessed via global functions.
As a solution, this commit extends the boot info to also carry an array of all
kernel-reserved IRQs.
Ref #4217
For the constructor of Kernel_object<T> there are two variants. One for the
case that it is called from Core where the kernel object (type T) must be
created via a syscall and one when it is called from within the kernel and the
kernel object can be created directly. Selecting one of these variants was done
using a bool argument to the constructor. However, this implies that the
constructor of Kernel_object<T> and that of T have the same signature in the
variadic arguments, even in the syscall case, although technically it would
then not be necessary.
This becomes a problem as soon as kernel objects created by Core shall receive
additional arguments from the kernel, for instance a reference to the global
CPU pool, and therefore stands in the way when wanting to get rid of global
statics in the kernel. Therefore, this commit introduces two constructors that
are selected through enum arguments:
! Kernel_object(Called_from_kernel, ...);
! Kernel_object(Called_from_core, ...);
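For illustration, the underlying C++ tag-dispatch idiom boils down to the
following self-contained sketch (placeholder bodies, not the actual base-hw
code):
! #include <utility>
!
! enum Called_from_kernel { CALLED_FROM_KERNEL };
! enum Called_from_core   { CALLED_FROM_CORE };
!
! template <typename T>
! struct Kernel_object
! {
!     /* kernel-internal case: construct T directly from T's arguments */
!     template <typename... ARGS>
!     Kernel_object(Called_from_kernel, ARGS &&... args)
!     { T object(std::forward<ARGS>(args)...); (void)object; }
!
!     /* core case: would issue the syscall, so the argument list is free
!        to differ from T's constructor signature */
!     template <typename... ARGS>
!     Kernel_object(Called_from_core, ARGS &&...) { }
! };
!
! struct Thread { Thread(int prio) { (void)prio; } };
!
! /* the enum tag selects the constructor variant */
! Kernel_object<Thread> in_kernel { CALLED_FROM_KERNEL, 128 };
! Kernel_object<Thread> in_core   { CALLED_FROM_CORE, "label", 3 };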
Ref #4217
The various mapping methods are modelled after the requirements of
the Intel GPUs or rather the Mesa driver back end.
With upcoming support for other driver back ends, we need to
squeeze their requirements in as well. For now, hijack 'map_buffer'
to provide for specifying the kind of attributes the client needs.
For now all buffers mapped in the GGTT for Intel GPUs are treated
as RW.
Issue #4265.
This call allows for checking if the given execution buffer has been
completed and complements the completion signal. Initially the GPU
multiplexer always sent such a signal when the currently scheduled
execution buffer has been completed. During enablement of the 'iris'
driver it became necessary to properly check the sequence number.
In case of the Intel GPU multiplexer the sequence numbers are
continuous, which prompted the greater-than-or-equal check in the
DRM back end. Hiding this implementation detail behind the
interface leaves GPU drivers free to deal with sequence numbers any
way they like and allows for polling in the client, where the
completion signal is now more of a progress signal.
Issue #4265.
The current info implementation (as RPC) is limited in a few ways:
* The amount of data that may be transferred is constrained by the
underlying base platform
* Most information never changes during run time but is copied
nonetheless
* The information differs depending on the used GPU device and
in its current implementation only contains Intel GPU specific
details
With this commit the 'info' RPC call is replaced with the
'info_dataspace' call that transfers the capability for the dataspace
containing the information only. This is complemented by a client
local 'attached_info' call that allows for getting typed access to
the information. The layout of the information is moved to its own
and GPU-specific header file, e.g., 'gpu/info_intel.h'
Issue #4265.
Rather than using the dataspace capability directly, let the client
choose its own local identifier that is linked to the underlying
capability.
Fixes #4265.
Right now the warning about failure to forward packet from driver to
uplink RX connection reads:
"exception while trying to forward packet from driverto Uplink
connection TX"
Add missing space between "driver" and "to".
Issue #4264
32KB is a rather small value. The driver can cope with it now, but
it does not perform as well as it should. This is visible especially
in scenarios like nic_router_flood where we still often hit the
synchronous wait path. Bump the size to 256kB.
Issue #4264
The problem can be seen when running the nic_router_flood scenario on arm
qemu_virt boards. With the amount of data this scenario tries to send
the driver quickly complains it has failed to push data into TX VirtIO
queue. After this warning message is printed nothing really happens and
after a while the test scenario fails.
The fact that we can't write all available data to the device is not
unexpected. VirtIO queue size is selected at initialization time and we
don't change it during driver lifetime. It can be tweaked via driver
config, but this does not change the fact that we'll always be able to
produce more data packets than we have free space in the VirtIO queue.
IMO the expected behavior of the driver in such a case should be to:
1. Notify the device there is data to process.
2. Wait for the device to process at least part of it.
3. Retry sending queued packets.
One could expect returning Transmit_result::RETRY from _drv_transmit_pkt
would produce such result. Unfortunately it seems that Uplink_client_base
treats RETRY return value as indication of link being down. It'll retry
sending the packet only after the device notifies it the link is once
again up. This is the reason why nothing happens when running
nic_router_flood on top of virtio_nic driver. The link never goes down
in this case so once we fill the TX VirtIO queue and tell the base class
to retry the send, we'll be stuck waiting for link up change event
which will never arrive.
To fix this problem, when sending a packet to the device fails, do a
synchronous TX VirtIO queue flush (tell the device there is data to process
and wait until it's done with it).
With this fix in place nic_router_flood test scenario passes on both arm
qemu_virt boards.
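The gist of the fix, with hypothetical queue and device helpers standing in
for the driver's actual members (a sketch only):
! /* hypothetical sketch of the fixed TX path, not the actual driver code */
! Transmit_result _drv_transmit_pkt(char const *ptr, Genode::size_t size)
! {
!     if (_tx_queue_push(ptr, size))
!         return Transmit_result::ACCEPTED;
!
!     /* queue full: 1. notify the device, 2. wait until it consumed the
!        queued descriptors, 3. retry the packet once */
!     _notify_device_tx();
!     _wait_until_tx_queue_drained();
!
!     return _tx_queue_push(ptr, size) ? Transmit_result::ACCEPTED
!                                      : Transmit_result::REJECTED;
! }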
Issue #4264
The contents of those descriptor rings can be modified by the device.
Mark them as volatile so the compiler does not make any assumptions
about them.
Issue #4264
This commit contains features and bug fixes:
* Catch errors during resource allocation
* Because Mesa tries to allocate fence (hardware) registers for each
batch buffer execution, do not allocate new fences for buffer objects
that are already fenced
* Add support for global hardware status page. Each context additionally
has a per-process hardware status page, which we used to set the
global hardware status page during Vgpu switch. This was obviously
wrong. There is only one global hardware status page (set once during
initialization) and a distinct per-process page for contexts.
* Write the sequence number of the currently executing batch buffer to
dword 52 of the per-process hardware status page. We use the pipeline
command with QW_WRITE (quad word write), GLOBAL_GTT_IVB disabled
(address space is per-process address space), and STORE_DATA_INDEX
enabled (write goes to offset of hardware status page). This command
used to write to the scratch page. But Linux now uses the first
reserved word of the per-process hardware status page.
* Add Gen9+ WaEnableGapsTsvCreditFix workaround. This sets the "GAPS TSV
Credit fix Enable" bit of the Arbiter control register (GARBCNTLREG).
As described by the documentation, this bit should be set by the BIOS
but is not on most Gen9/9.5 platforms. Not setting this bit leads to
random GPU hangs.
* Increase the context size from 20 to 22 pages for Gen9. On Gen8 the
hardware context is 20 pages (1 hardware status page + 19 ring context
register pages). On Gen9 the size of the ring context registers has
increased by two pages to 21 pages or 81.3125 KBytes as the IGD
documentation states.
* The logical ring size in the ring buffer control of the execlist
context has to be programmed with number of pages - 1. So 0 is 1 page.
We programmed the actual number of pages before, leading to ring
buffer execution of NOOPs if the page behind our ring buffer was empty or
GPU hangs if there was data on the page.
issue #4260
* Wait for completion before returning from 'execbuffer2'. This makes
buffer execution synchronous.
* Because the Iris driver manages the virtual address space of the GPU
and creates one GEM context for each batch buffer we have to map/unmap
all buffer objects before and after batch buffer execution.
issue #4260
The lx_emul_virt_to_pages implementation initialized the page ref
counter only for the first page, leaving the remaining elements in
uninitialized state. This, in turn, rendered the Linux page_pool (as
used by the emac network driver) ineffective, ultimately leading to a
memory leak. The fix changes the call of 'init_page_count' to take the
loop variable as argument.
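Sketched in code, the change amounts to initializing every page of the range
instead of only the first one (simplified, not the literal patch):
! #include <linux/mm.h>
!
! static void init_pages(struct page *page, unsigned nr_pages)
! {
!     unsigned i;
!     for (i = 0; i < nr_pages; i++)
!         init_page_count(&page[i]);   /* was: init_page_count(page) */
! }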
Issue #4225
Increased number of trace subjects since the test sporadically fails on
some platforms.
Also added a sanity check to print an error message in case we run into
the same issue again.
Fixes genodelabs/genode#4261
This patch adds a switch to the internal poll function in libssh that
allows forcing this function to return immediately without actually
polling for data and, in consequence, processing this data. This switch
is used to avoid calling callback functions when flushing output
streams, which caused deadlocks due to recursive access to the internal
ssh_terminal sessions registry.
Issue #4258
This patch allows replacing sftp packet read and write with
completely asynchronous versions, needed to properly hook into the
existing ssh_terminal implementation.
Issue #4258
Using 'alignas' in declarations might cause GCC to request an
implementation of 'operator delete(void*, unsigned long, std::align_val_t)'
although it might actually never be called. This commit adds a dummy
implementation to 'cxx/new_delete.cc' that does nothing more than printing an
error to the log that a proper implementation is missing. This approach is
coherent with our treatment of other global delete operators.
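The added dummy boils down to something like the following (a sketch; the
exact diagnostic text differs):
! #include <new>
! #include <base/log.h>
!
! void operator delete (void *, unsigned long, std::align_val_t) noexcept
! {
!     Genode::error("cxx: aligned operator delete called but not implemented");
! }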
Ref #4217
If one has an object X that has a minimum alignment requirement specified
through 'alignas' this requirement is normally inherited by objects that have
object X as member, and by those that have objects as member that have X as
member, and so on. However, this chain used to get silently interrupted
(dropping the minimum alignment requirement to 8 again) at objects that are
managed with Genode::Reconstructible or Genode::Constructible. In order to fix
this, the commit ensures that Genode::Reconstructible (and therefore also
Genode::Constructible) has at least the minimum alignment requirement (using
'alignas') as the object it manages.
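A reduced example of the underlying C++ rule (illustrative types only;
Genode::Constructible essentially wraps such an in-place byte buffer):
! struct alignas(64) Dma_buffer { char data[64]; };
!
! template <typename T>
! struct Buffer_plain      {            char space[sizeof(T)]; };
!
! template <typename T>
! struct Buffer_with_align { alignas(T) char space[sizeof(T)]; };
!
! static_assert(alignof(Buffer_plain<Dma_buffer>) < alignof(Dma_buffer),
!               "the requirement silently got lost");
! static_assert(alignof(Buffer_with_align<Dma_buffer>) == alignof(Dma_buffer),
!               "the requirement propagates to the enclosing object");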
Ref #4217
The NIC router parses, stores and forwards DNS domain names from DHCP replies.
Yet the router's DHCP client used to not request DNS domain-name information on
DHCP requests. This caused DHCP servers to skip this information on their
replies although it was available. This commit fixes the issue by adding the
DNS domain name code to the request parameter list of requests from the router's
DHCP client.
The 'black_hole' component provides dummy implementations of common
session interfaces.
At this time, only the 'Audio_out' session is provided if enabled
in the configuration of the component:
<config>
<audio_out/>
</config>
Issue #3653
* Switch mesa support from DRI to gallium
Supported drivers are
- softpipe (Sebastian Sumpf)
- iris for Intel GPUs (Alexander Boettcher)
- etnaviv for Vivante GPUs (Josef Söntgen)
* Mesa's generated files are placed into 'contrib/mesa-<hash>/generated'
and are cloned per default from a separate Git repo in order to avoid
hash updates upon package build. In case you need to generate files
yourself use
! prepare_port mesa GENERATE_FILES=1
issue #4254
Support for iris and etnaviv
* etnaviv:
- libdrm on FreeBSD is not prepared for !PCI (and our libc is missing
<sys/pciio.h>)
- missing <sys/types.h> include in xf86drmMode.c
- etnaviv relies on linux header files - dummy in $(INC_DIR)
- IOCTL FreeBSD ↔ Linux have swapped IO/OUT bit
- O_CLOEXEC differs between FreeBSD ↔ Linux
issue #4254
According to the spec, the tail pointer points to the next qword instruction
that will be used by the software.
p 1354, Doc Ref # IHD-OS-BDW-Vol 2c-11.15
issue #4254
Superpages (2M, 1G) are not supported for now, but partially copied-over code
from base-hw was around. Remove unused register definitions and remove
non-working superpage code to avoid confusion.
issue #4254
The size argument of the GGTT free-range check was ignored, which led to
overlapping allocations, which in turn led to unavailable IO-MEM exceptions
thrown by core.
issue #4254
Both trace_logger and vfs_trace had their own trace_buffer.h. This
commit consolidates the existing implementations and provides the
resulting trace_buffer.h at 'include/trace/'. It thereby becomes part of
the trace API archive.
genodelabs/genode#4244
Driver code such as mfd-core.c may pass 0 as argument n to kcalloc,
which eventually results in an allocation size of 0.
res = kcalloc(cell->num_resources, sizeof(*res), GFP_KERNEL);
Since 'res' is checked against NULL for success, kmalloc must not return
a NULL pointer in this case. The patch works around this issue by
forcing an allocation size of 1 byte in this case.
Issue #4253
Clock providers such as drivers/clk/sunxi-ng/ccu-sun8i-r.c don't use
regular init calls but declare their init functions via CLK_OF_DECLARE,
which fill the __clk_of_table. Linux populates the table statically by
using special sections declared in the linker script. In contrast, we
populate the table by expanding the macro to global constructor
functions.
The __clk_of_table is then processed by the call of of_clk_init(NULL).
Issue #4253
* Disable trace source and release ownership on subject destruction.
* Note, since the policy module is also destroyed on destruction of the
session component, the traced component must not access the policy
module when acknowledging the disabled state (else: page fault).
Fixes genodelabs/genode#4247
If the trace subjects are not properly destructed when the TRACE client
disappears, enabled sources will be owned by a non-existing client.
In other words, when a TRACE client disappears all sources owned by the
client must be disabled.
genodelabs/genode#4247
test-trace always passed, although tracing was never enabled because the
trace subject was not within the first 32 subjects.
* increase number of queried subjects
* output error if trace subject was not found
genodelabs/genode#4247
With this commit, the NIC router DHCP client reads out the first DNS domain
name (DHCP option 15) if any from a DHCP reply that generates an IPv4 config
for a domain and stores the name together with the IPv4 config for that domain.
DNS domain names are reported via the new report tag '<dns-domain>' if the
'config' attribute in the config tag '<report>' is set.
Furthermore, the NIC router DHCP server becomes able to obtain a DNS domain
name from another domain that has a DHCP client dynamically (given the config
attribute 'dns_config_from' is set and no static DNS config is given) or
statically from its configuration (new config tag '<dns-domain>') and propagate
this name with DHCP replies (DHCP option 15).
The 'nic_router_dhcp_*' tests are adapted to test the new features.
The commit also gets rid of some mirrored files in
'test/nic_router_dhcp/manager'.
Fixes #4246
WARNING: BREAKS CONFIG COMPATIBILITY!
This commit changes the configuration interface of the NIC router in a way that
may break systems that use the component without proper adjustment!
How to adjust:
At each occurrence of the 'dns_server_from' attribute in a NIC router
configuration replace the attribute name with 'dns_config_from'. The attribute
value remains unaltered.
DETAILED DESCRIPTION
The new attribute name 'dns_config_from' reflects that also other aspects of
the DNS configuration of the denominated domain are used by the DHCP server
that holds the attribute. This commit is a preparation for forwarding also the
domain name (DHCP option 15) with the mechanism behind the attribute.
Ref #4246
The fact that the IPv4 config was a struct with all data members public was a
mere leftover of an early state of the NIC router. Today, the router
implementation style is to avoid structs and public data members wherever
possible.
This commit slightly changes the behavior of the router regarding log output.
The router used to print malformed IPv4 configurations to the log only if
the 'verbose' config flag was set using this style:
! [my_domain] malformed dynamic IP config: interface 10.0.2.1/24 ...
Now, malformed IPv4 configurations are only printed if the
'verbose_domain_state' config flag is set (like with any IPv4 configuration
states) using this style:
! [my_domain] dynamic IP config: malformed (interface 10.0.2.1/24 ...)
Fixes #4242
The NIC router DHCP server used to add an extra option 6 field to DHCP replies
for each DNS server address. This conflicts with RFC 2132, section 3.8, which
states that the addresses should be listed within one option 6 field without
delimiter. The discrepancy is fixed by this commit.
Ref #4242
File size must be the same as the number of bytes that can be read from
the file. Otherwise, this will trigger a `Truncated_during_read`
exception.
Fixes genodelabs/genode#4240
Via a new configuration attribute, the user can decide whether the router
should answer dropped fragmented IPv4 with an ICMP "destination unreachable"
packet and, if so, which value the ICMP code field of this packet should have.
The default is that the router doesn't send such responses (silently dropping
fragmented IPv4). The behavior is tested by the 'nic_router_ipv4_fragm' test.
Fixes #4236
If the new attribute 'dropped_fragm_ipv4' of the <report> tag in the NIC router
config is set "yes", the router will report the number of packets that were
dropped per interface and domain, respectively, because fragmented IPv4 is not
supported. The default is not to report the counter. The behavior is tested by
the 'nic_router_ipv4_fragm' test.
Ref #4236
The NIC router used to ignore the IPv4 header fields "More fragments" and
"Fragment offset" completely. Therefore higher-level protocols of fragmented
IPv4 were interpreted wrongly because each fragment was considered a self-
standing packet, expecting, for instance, UDP/TCP headers somewhere inside of
the UDP/TCP data field. Normally, such packets were dropped as soon as the
UDP/TCP checksum check failed because of the misinterpretation. However,
it was also possible for fragmented IPv4 to pass the router although normally
only partially.
IPv4 fragmentation support in the router would introduce some potential
security risks and is presumably not an easy endeavor. So, for now, we settled
on not supporting IPv4 fragmentation. With this commit, the router simply drops
all fragmented IPv4. This is reflected to the log for each fragment as "drop
packet (fragmented IPv4 not supported)" when 'verbose_packet_drop="yes"' is
configured.
The new test 'run/nic_router_ipv4_fragm' is an automated test for this
behavior. The test is added to the autopilot list.
Ref #4236
- remove redundant file system factory
- remove dead code block
The code was guarded by preprocessor directives checking whether the
contrib code define "_USE_MKFS" is 1. As "_USE_MKFS" is not set to one
for our port of FAT, the code was never executed and can be removed.
- remove ineffective config attributes
Apparently, the former XML attributes to the plugin 'drive' and
'codepage' had no effect. I tested them in a scenario with the VFS
block server on a disk-image boot-module as back end. Regardless of
the 'drive' value, the block session label was always "0". Regardless
of the 'codepage' value, the FAT on the disk image succeeded to mount
when not using '--codepage' for 'mkfs.fat' and failed to mount when
using '--codepage' to specify a supported but foreign codepage for
'mkfs.fat' (e.g. "720").
Ref #4220
There was one global static constructor:
! namespace Fatfs { static Constructible<Platform> _platform; }
This caused applications that used the lib or the <fatfs> VFS plugin to end up
in an uncaught exception due to Genode::Component complaining that method
'construct' returned without executing pending static constructors if they
didn't call Genode::Env::exec_static_constructors().
As the use of Genode::Env::exec_static_constructors() is discouraged in Genode,
this commit rather moves the '_platform' object to the scope of the
initializing function and introduces a global static pointer to the object that
gets set by the initializing function. Although this prevents the exception, it
is, technically speaking, even worse than the former solution as the new pointer
isn't checked for validity in contrast to the 'Constructible' object.
However, so far, I don't see a clean solution to this problem without the need
for Genode::Env::exec_static_constructors().
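The resulting pattern looks roughly like this (the function name, its
signature, and the 'Platform' stand-in are assumptions for the sketch):
! #include <base/env.h>
!
! namespace Fatfs {
!     /* stand-in for the plugin's actual platform state */
!     struct Platform { Genode::Env &env; Platform(Genode::Env &e) : env(e) { } };
! }
!
! /* plain pointer instead of a global 'Constructible', no static constructor */
! static Fatfs::Platform *_platform_ptr = nullptr;
!
! void fatfs_init(Genode::Env &env)
! {
!     /* function-local static: constructed on first call, not at load time */
!     static Fatfs::Platform platform(env);
!     _platform_ptr = &platform;
! }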
Fixes #4220
* the GPU multiplexer now offers the platform service to the Intel
framebuffer driver (driver_manager)
* adjusted drivers_managed-pc to hand out resources to the GPU driver
* adjust quotas
issue #4233
The platform service is intended to be used by dde_linux's intel_fb_drv
in order to initialize displays.
* implement and announce platform session
* limit accessible GTT and aperture of client to 64 MB
* forward display engine IRQs to platform client
* move all PCI resources to 'Igd::Resources' class in order to make them
accessible by the platform service and the GPU driver
* fix fence register allocation for id zero (return true)
issue #4233
For mesa-21 the client manages the virtual address space of the vGPU by
itself and the intel/gpu driver can't silently add a guard page anymore.
Move the patch to the drm/ioctl
of the former mesa version.
Issue #4148 #4233
_unmap_dataspace_ggtt requires the cap of Ggtt::Mapping (ring_map, ctx_map)
in order to find the right metadata and to free up the ggtt entries. Also the
pte range is removed already if the metadata was found.
Issue #4148 #4233
BREAKS CONFIG COMPATIBILITY:
This commit changes the configuration interface of the NIC router in a way that
may break systems that use the component without proper adjustment!
HOW TO ADJUST:
At each occurrence of the '<uplink ...>' tag in a NIC router configuration
replace the tag name 'uplink' with 'nic-client'. The rest of the tag stays the
same.
The term "uplink" for network interfaces in the router that have a NIC session
client as back end was introduced in a time when Uplink sessions didn't yet
exist. Now, they do and, although both an uplink and an Uplink session
normally describe a network session between router and network device driver,
they are based on two different service types (NIC and Uplink). This can easily
cause confusion when integrating the router (the <uplink> is not related to
Uplink sessions) or trying to understand its functioning (an 'Uplink' object
has nothing to do with the Uplink service).
Therefore, this commit introduces the more specific term "NIC client" for an
interface that is based on a NIC session requested by the router. This doesn't
imply any semantic changes at the NIC router. However, the commit also brings a
broader update of the router's README and removes the term "downlink" that was
used only in documentation to refer to interfaces backed by a NIC session
provided by the router. The term was only associated with this meaning because
it is the natural counterpart to an uplink. This isn't appropriate anymore as
the terms for interface types have moved to a more technical level.
The commit adjusts all scenarios in the basic Genode repositories properly.
Fixes #4238
An interface that received a signal for a link-state change accessed its
domain reference without considering that it might not be attached to a domain
at that moment. This caused the NIC router to crash with an uncaught exception
of type 'Net::Pointer<Net::Domain>::Invalid'. The commit adds a catch
directive for this exception resulting in the handler doing nothing if not
attached to any domain.
Fixes #4222
The test script failed during preparation of the on-target execution for
USB Armory with the following error:
! can't read "tz_vmm_block_irq": no such variable
Presumably, the script wasn't run anymore since the introduction of the
'tz_vmm_block_irq' variable for i.MX53 QSB. As we do not have infrastructure
for automated testing of the USB Armory and there seems to be not much
interest in using Genode on this platform, this commit simply removes the
support from the script.
Filtering boards in a run script by specs isn't the right way anymore (the
specs do not exist anymore). Nowadays, we have to use [have_board] instead.
Ref #4229
For unknown reasons, the former 'wget genode.org' call, which was meant to test
networking in the TrustZone guest on imx53_qsb_tz, didn't succeed anymore although
the same call succeeded on my Sculpt VM Linux. However, 'ping 1.1.1.1' still
works, so, the script now uses this as test for networking instead.
Fixes #4229
So far, in order to create an ARP reply, the NIC router merely created a copy
of the corresponding ARP request and modified only those values that differ.
This approach has the disadvantage of re-using bad parameters from a broken
request. The specific use-case that made this visible was an early version of
the Pine board network driver that used to forward ARP requests with a greater
size than required. The ARP replies of the router re-used this size and
confused other network nodes with that. In general, the NIC router should
rely on the data of incoming packets as little as possible. Therefore, with this
commit, the router creates a new ARP reply from scratch and uses only those
values required from the corresponding ARP request.
Fixes #4235
The former declaration of the IPv4 packet not only used the questionable
tool of implementation-defined C++ bitsets but also lacked access to the flags
"don't fragment" (DF) and "more fragments" (MF). This commit replaces the
C++ bitsets by using the register framework and introduces accessors for the
missing flags.
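With the register framework, the accessors boil down to bitfield definitions
over the 16-bit flags/fragment-offset word, roughly like this (a sketch of the
idea, not the literal declaration):
! #include <util/register.h>
!
! struct Flags_and_offset : Genode::Register<16>
! {
!     struct Fragment_offset : Bitfield<0, 13> { };
!     struct More_fragments  : Bitfield<13, 1> { };   /* MF */
!     struct Dont_fragment   : Bitfield<14, 1> { };   /* DF */
! };
!
! static bool fragmented(Flags_and_offset::access_t v)
! {
!     return Flags_and_offset::More_fragments::get(v)
!         || Flags_and_offset::Fragment_offset::get(v);
! }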
Ref #4236
This commit introduces a C API for using the Uplink session, as well as for
serving as a Block service. It can be used by drivers ported from
C-only projects, like the Linux kernel or BSD kernels, for instance.
Fix #4226
The renewed approach currently supports ARM 64-bit only.
It depends on the Platform API of the ARM architecture.
It tries to meet the original semantic of the Linux kernel
functions as far as possible. To achieve this, device drivers
using this library should reference the original Linux kernel
headers first and foremost. Only the headers in `src/include/lx_emul/shadow`
shadow the original ones.
Fix #4225
skb_push() already increases the skb->len by ETH_HLEN, hence adding
ETH_HLEN to the packet_size is redundant.
A too large packet size becomes a problem for large MTUs. With a maximum
MTU of 1500, adding ETH_HLEN twice will lead to a packet size of 1528.
Since this is larger than what we expect for good-old Ethernet (max. 1522),
some clients (e.g. the e1000 model in vbox5) may drop these packets.
Fixes genodelabs/genode#4228
I discovered thinkbroadband.com requires the User-Agent header field and
rejects requests missing it with HTTP response code 403 "access to the
requested resource is forbidden". Now, fetchurl always adds the
User-Agent header fetchurl/LIBCURL_VERSION.
Also the error message now contains the HTTP response code.
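In libcurl terms, the two additions correspond to calls like these (a sketch;
fetchurl's actual option handling is more involved):
! #include <curl/curl.h>
!
! static void add_user_agent_and_report_code(CURL *curl)
! {
!     /* LIBCURL_VERSION is the version string from <curl/curlver.h> */
!     curl_easy_setopt(curl, CURLOPT_USERAGENT, "fetchurl/" LIBCURL_VERSION);
!
!     /* after curl_easy_perform(), include the HTTP response code in messages */
!     long code = 0;
!     curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
! }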
The symlink implementation wrongly constructed a 'Sync' object within
the context of a monitor call. The 'Sync' constructor indirectly
depended on libc I/O for obtaining the current time, ultimately
resulting in a nested attempt of a monitor call. This could be
reproduced via the base.run script:
$ cd /home
$ ln -s a b
The 'ln' command resulted in the following log message:
[init -> /bin/bash -> 7] Error: deadlock ahead, mutex=0x10ff8c70, return ip=0x500583a7
The patch fixes the problem by splitting the single monitor call into
two monitor calls and moving the construction of the 'Sync' object
in-between both monitor calls, thereby executing the constructor at the
libc application level.
Fixes #4219
Building the elfloader in kernel-sel4.inc has a problem with Genode's CCACHE
make variable. When issuing ...
! ./tool/depot/create mstein/bin/*/base-sel4-* CCACHE=yes
..., building the elfloader used to consume all memory of the host system and
then run into a segmentation fault:
! make[6]: *** [elfloader/elfloader.o] Segmentation fault (core dumped)
This is because the other build system invokes the CCACHE variable as a command
in front of the compiler command. If CCACHE is set to 'yes', the 'yes' command
is called and produces an endless output into some output file. The problem
can be fixed by locally re-setting the CCACHE variable for the
'make ... elfloader' command to 'ccache' (Genode CCACHE==yes) or '' (Genode
CCACHE!=yes).
Fixes #4212
Adds a try-catch statement with diagnostic errors in Dhcp_server::free_ip in
order to guard against exceptions from the underlying bit allocator. These
exceptions should never happen given that the router is programmed correctly
and always feeds Dhcp_server::free_ip with sane arguments (which it should).
However, should this not be the case, we can assume that the failed IP freeing
indicates that the IP isn't allocated anyway and it's fine to continue using
the router. Furthermore, IP allocations are a mere client service and not
relevant for the integrity or safety of the router.
Ref #4200
When Interface::handle_config_3 (third step of applying a new configuration to
interfaces) tried to detach the interface from the current IP config because
the old and new IP config differed, it did so using the new domain. The former
steps of the reconfiguration already installed the new domain reference at the
interface. Therefore, also the DHCP server of the new domain was used. This,
however, caused uncaught exceptions because detaching from an IP config
includes dissolving all DHCP allocations. This dissolving of DHCP allocations
now operated on a DHCP server (the one of the new domain) that wasn't related
to the allocations and, in the worst case, caused an uncaught exception
because the IPs were out of its range.
That said, this commit ensures that detaching an interface from an IP config
is always done on the domain from which the IP config originated. Normally,
this is the domain the interface is attached to. But in the case of
Interface::handle_config_3, it is another - the former domain the interface
was attached to.
The commit also adapts the nic_router_dhcp_* tests in a way that they
reconfigure the router in a way that would trigger the uncaught exception
without the fix.
Fixes #4200
Introduce two new cache maintenance functions:
* cache_clean_invalidate_data
* cache_invalidate_data
used to flush or invalidate data-cache lines.
Both functions are typically empty, except for the ARM architecture.
The commit provides implementations for the base-hw kernel and Fiasco.OC.
Fixes #4207
The implementation conflicted with the implicit declaration of bzero:
.../repos/dde_bsd/src/lib/audio/mem.cc: In function ‘void bzero(void*, size_t)’:
.../repos/dde_bsd/src/lib/audio/mem.cc:377:2: warning: ‘nonnull’ argument ‘b’ compared to NULL [-Wnonnull-compare]
Adapts Dir_file_system::open_composite_dirs in a way that it returns "success"
when the leaf node of the path is an empty directory but "lookup failed", as
usual, if one of the other directories on the way to the leaf node is empty.
I couldn't find a technical reason why we used to return "lookup failed" when
only the leaf node was empty.
The commit also adds a test for an empty root directory and empty
sub-directories to the fs_query run script.
Fixes #4198
The fs_query component used to exit with an uncaught exception if a queried
directory didn't exist. Now, fs_query will catch this event and simply skip the
affected query, thereby indicating to the user that the queried directory
does not exist.
Ref #4032
- Patch the XHCI model in order to handle frame wrapping correctly. For
this adjust 'mfindex_kick' to the correct period (same, before, or after
'mfindex').
- Flush EP when it is stopped; this causes all pending packets for the EP
to be acked. Correct counting of packets in flight.
- Add BEI patch by Josef.
issue #4196
- API packages for: libusb, libuvc, and libyuv
- Source packages for: API packages + USB webcam app
- Meta package for USB webcam
- Raw package for USB webcam configuration
issue #4196
Unfortunately, our current implementation of 'wmb()' doesn't seem to do what we
want it to do. On base-hw + imx6q_sabrelite, the write of bdp->cbd_sc seems to
get re-ordered after the write to txq->bd.reg_desc_active in the transmission
path of the contrib code. Due to this, the transmission of the packet is only
triggered the next time a packet is sent. However, we only quick-fix it by
enforcing the execution of the write with a volatile global read as we will
soon update the FEC NIC port with a new DDE approach anyway.
Fixes #4010
In ROM mode the global CapsLock state is controlled by the capslock ROM
via virtual KEY_CAPSLOCK events.
Guests are easily confused by spurious KEY_CAPSLOCK input events in
caps="rom" mode. These spurious events may reach the VMM if KEY_CAPSLOCK
is not pressed as first key in a combination and, therefore, is not
filtered as global key. We filter KEY_CAPSLOCK in ROM mode in the VMM
explicitly, but let it pass in non-ROM mode.
Per default RAW mode is used and CapsLock key events are sent unfiltered
to the guest.
Enable watching files via the inotify interface of the Linux Kernel.
Delivery of watches to components is staggered in order to prevent an
overflow of the ACK queue in cases when a lot of changes are made to the
file system from the Linux side.
Fixes #4070
Guests are easily confused by spurious KEY_CAPSLOCK input events in
caps="rom" mode. These spurious events may reach the VMM if KEY_CAPSLOCK
is not pressed as first key in a combination and, therefore, is not
filtered as global key. Now, we filter KEY_CAPSLOCK in ROM mode in the
VMM explicitly, but let it pass in non-ROM mode.
Fixes #4087
Because qemu-usb allocated host devices after 'USB_HOST_DEVICE' in the
object array and 'USB_WEBCAM' is located after 'USB_HOST_DEVICE', the
webcam model can overwrite an already allocated pass-through device. As
a solution, add 'USB_FIRST_FREE' to make it clear from where host
devices can be allocated. Also increase the number of supported host
devices to eight.
fixes #4182
If no window has ever been focused, next() always returns an invalid
window id. As a consequence, there is no way to cycle through the focus
history without an explicit focus event (e.g. mouse hover).
Instead, next() should return the first window from the focus history if the
currently focused window is not present.
Fixes genodelabs/genode#4164
The wpa_supplicant refuses to set the BSSID in case it is quoted.
Removing the quotes allows for specifying the BSSID in the
configuration.
Fixes #4175.
A reset domain can consist of one or several reset-pins
denoted by name that are assigned to a device.
When the device gets acquired via the Platform RPC API,
the pins are de-asserted, and asserted again when the
device gets released.
A configuration looks like the following:
<device name="mipi_dsi">
<reset-domain name="mipi_dsi_pclk"/>
...
</device>
Fixes #4171
Introduces the notion of a transaction that consists of one or more
messages. A message has a read or write direction and consists
of one or more bytes.
Issue #4170 Fixes #4169
Report via platform_info the capabilities of the kernel, e.g. ACPI and MSI.
With the commit the try-catch pattern on IRQ session creation by the platform
driver is avoided.
Issue #4016
- Do not perform destruction on report update in EP because
'unregister_device' may block on Led state 'update' (synchronous
control message) leading to the driver being stuck because no more
signals are received
- Check if device is present in 'submit_urb' calls
fixes #4166
- Signal device ready depending on state (ready or not) immediately or
when "actconfig" is set
- Report new devices when ready
- Drain packet stream in case there is no device present (needed for
synchronous operations at client side)
- Do not use 'session_device' on device destruction, check pointer
directly instead
issue #4149
Adds the <new-file> operation to the fs_tool. When configured, the
<new-file path="...">...</new-file> tag will cause creation or overwriting of
the file given through the 'path' attribute. The file will contain the text
content of the tag.
Ref #4032
This patch moves the utility from the app/text_area to os/vfs.h to make
it easier to use by other components. By hosting the 'New_file' as a
friend alongside the 'Directory', we can now pass a 'Directory' as
constructor argument, which is consistent with other utilities such as
'File_content'.
As a further improvement, the new version supports the implicit creation
of the directory hierarchy leading to the new file.
Issue #4032
Mapping normal memory bufferable restores support for unaligned reads on
DMA memory and prevents the following errors on imx6q_sabrelite.
KERNEL0: alignment error at 18003061 (PC: 0102e3f8, SP: 401ffb18, FSR: 90000001, PSR: 20000110)
Issue #4094
Issue #4157
By adding an attribute 'size="yes"' to a query, one instructs fs_query to
report also the size of each queried file as attribute 'size' of the
corresponding 'file' node.
Ref #4032
The fs_query component used to try watching all files it found, resulting in
errors on files that are not watchable. For some files, however, the watch-
feature doesn't make sense as they are not readable (no content, no size).
Now, fs_query will check first whether a file is readable and skip watching
if it isn't.
Ref #4032
When configuring fs_query to print the content of files, it used to try to do so
for all files it found, resulting in errors on files that are not readable. Now,
fs_query will check first whether a file is readable and skip printing the
content of those that are not.
Ref #4032
Managing ssh event file descriptors was performed from two different
threads, which could cause reallocation of a structure used in the other
thread in a call to the 'poll' function.
Split the initialization into parts and moved the ssh event part into the ssh loop.
Issue #4095
Moved the creation of the ssh loop thread to after the initialization of the
wake-up server file descriptors to make sure that they will be properly handled
even in the first loop run.
Issue #4095
After update of stdcxx, either hardware (CPU) random sources are taken
or, if not available/insufficient, /dev/urandom is used.
Issue #3967
Issue #4094
For fs_file_system, reads are limited to the size of the packets from the
File_system session. Hence, we cannot read large files in one go.
This fix is particularly helpful for fonts_fs, as it enables including font
files from a File_system.
genodelabs/genode#4135
Comment in Linux sources:
Since an ethernet header is 14 bytes network drivers often end up with
the IP header at an unaligned offset. The IP header can be aligned by
shifting the start of the packet by 2 bytes. Drivers should do this
with:
skb_reserve(skb, NET_IP_ALIGN);
This is ensured when using netdev_alloc_skb_ip_align().
Issue #4094
This patch takes advantage of block transfer interrupts on Intel XHCI
controllers, which are used during isochronous transfers. Because of a bug
in hardware (see usb_host_isoc_bei.patch header), this feature has been
disabled for Intel, leading to up to 8000 interrupts/s for isochronous
transfers, causing severe CPU consumption on Genode. With this commit we
lower host driver consumption to normal levels.
issue #4149
A Lx::Task is now associated with a USB device, not the session anymore.
This implies that a task lives as long as the device, making it possible
to gracefully handle outstanding requests (i.e., synchronous) in case
the session has been closed.
issue #4149
The old port version contained '*.ali' files that were built with an older GCC
which led to problems when compiling packages that use the port with the new
GCC 10. The '*.ali' files of the new port version were generated with GCC 10.
Fixes #4145
When loading shared libraries via the 'Shared_object' interface, display
all additionally loaded libraries in case 'ld_verbose' is configured. Up
until now, only the loaded library was displayed. In order to determine
if a dependent library had already been loaded prior to loading the
'Shared_object', the reference counter is used.
fixes #4147
The default size on most kernels is 512M. On OKL4 we have to use 800M,
because of the statically configured memory ranges in the OKL4 kernel.
By not specifying a specific amount of memory, the default Qemu memory sizes are
used.
Issue #4095
Revert GNU ld to the old behavior where sections with the same name in multiple
ld scripts are merged. Binutils 2.36.1 creates two sections with the same name.
Fixes #4126
Download gmp, mpc and mpfr with the download script provided by the
gcc source tree and let the gcc build system handle the build of these
libraries with the correct compile options. This fixes build issues on
armhf Linux and removes the need to maintain mpc and mpfr ports in
the Genode tree.
Issue #4094
This patch fixes a GCC-10 compile error. Even though the optimization
was quite effective - I measured a speedup of factor 2 - it is not all
that important for the overall application performance. In the nano3d
case, we are talking about 1 vs. 2 percent of CPU time.
Fixes #4140
The kernel-agnostic 'Trace::timestamp' function for arm_64 executes the
'mrs %0, pmccntr_el0' instruction, which is not permitted for user-level
programs on Linux. This patch shadows the generic timestamp.h header
with a dummy that returns zero. This return value prompts the timeout
framework to disable the interpolation of time based on timestamps. This
avoids the illegal-instruction abort but comes with two limitations:
First, time measurements are effectively limited to a granularity of 1
millisecond (deliberately constrained by the timer driver).
The quirk is applied when using the base-linux API. Should a generic
application (that uses the base API only) call 'Trace::timestamp'
directly, the illegal instruction is executed.
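The shadowing header amounts to a stub along these lines (a sketch):
! /* shadow of the arm_64 'trace/timestamp.h' */
! #include <base/fixed_stdint.h>
!
! namespace Genode { namespace Trace {
!
!     typedef uint64_t Timestamp;
!
!     /* returning 0 makes the timeout framework skip timestamp-based
!        time interpolation instead of executing 'mrs %0, pmccntr_el0' */
!     inline Timestamp timestamp() { return 0; }
! } }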
Issue #4136
This patch adds support for running Genode/Linux on the AARCH64
architecture.
- The kernel-agnostic startup code (crt0) had to be extended to
capture the initial stack pointer, which the Linux kernel uses
to pass the process environment. This is in line with the
existing startup code for x86_32 and x86_64.
- The link order of the host libraries linked to lx_hybrid
programs had to be adjusted such that libgcc appears last
because the other libraries depend on symbols provided by
libgcc.
- When using AARCH64 Linux as host, one can execute run scripts
via 'make run/<script> KERNEL=linux BOARD=linux' now.
Issue #4136