This new version of the system_clock pkg no longer depends on the
presence of an external 'Rtc' service as previously provided by the
Sculpt base system. Instead, it hosts the rtc_drv inside the subsystem.
Because rtc_drv is board-dependent, the pkg is now named
system_clock-pc.
Issue #4281
Some guests don't handle remote wakeup correctly, causing devices to
stop functioning. Therefore, we disable the remote-wakeup bit (bit 5) in
`bmAttributes` of the device configuration descriptor.
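A minimal sketch of the descriptor tweak (only the bit manipulation is
shown; the surrounding USB host-controller code is assumed):

! /* bit 5 of bmAttributes marks remote-wakeup capability (USB spec) */
! enum { REMOTE_WAKEUP = 1u << 5 };
!
! void mask_remote_wakeup(unsigned char &bmAttributes)
! {
! 	bmAttributes &= (unsigned char)~REMOTE_WAKEUP;
! }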
Thanks to Peter for the initial fix.
Fixes #4278
ROM clients have to request an initial update of dynamic ROMs explicitly
and should not depend on artificial signals from the ROM session on
signal-handler registration.
Issue #4274
The sequence app should immediately stop the child once it has called
parent().exit(). Otherwise, the child continues execution, which causes
a race condition: the child's ld.lib.so will eventually destruct an
Attached_rom_dataspace for the config ROM. If sequence has destructed
the corresponding service first, we get an Ipc_error.
genodelabs/genode#4267
Warning!
The current version of the file vault is not meant for productive use but
merely for demonstration purposes! Please refrain from storing sensitive data
with it!
The File Vault component implements a graphical frontend for setting up and
controlling encrypted virtual file systems using the Consistent Block Encrypter
(CBE) for encryption and snapshot management. For more details see
'repos/gems/src/app/file_vault/README'.
Fixes #4032
The previously unconditional calls to Genode::log in cbe init and the CBE
trust-anchor VFS plugin were made dependent on a verbosity flag that is set to
"false" by default.
Ref #4032
Instead of simply encrypting the private key with AES-256 when storing it to
the 'encrypted_private_key' file, wrap it using the AES-key-wrap algorithm
described in RFC 3394 "Advanced Encryption Standard (AES) Key Wrap Algorithm".
This is more secure and enables us to directly check whether the passphrase
entered by the user was correct or not.
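As a hedged illustration (the file vault's actual implementation may differ
and the helper name below is made up), libcrypto's key-wrap routines express
the idea; unwrapping with a wrong key-encryption key fails the RFC 3394
integrity check, which is what reveals an incorrect passphrase:

! #include <openssl/aes.h>
!
! /* wrap a 32-byte private key with a 32-byte passphrase-derived KEK;
!    the result is 40 bytes, and unwrapping with a wrong KEK fails */
! int wrap_private_key(unsigned char const kek[32],
!                      unsigned char const priv[32], unsigned char out[40])
! {
! 	AES_KEY key;
! 	AES_set_encrypt_key(kek, 256, &key);
! 	return AES_wrap_key(&key, nullptr, out, priv, 32); /* default RFC 3394 IV */
! }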
Ref #4032
As the file formerly named 'secured_superblock' actually contains the hash of
the superblock that was secured, it was renamed 'superblock_hash'.
Ref #4032
As the file formerly named 'keyfile' actually contains the encrypted private
key of the Trust Anchor, it was renamed 'encrypted_private_key'.
Ref #4032
Until now, the symmetric keys were merely XOR'ed with the private key as a
placeholder for real encryption. Now they are encrypted using AES256 with the
TA's private key as the key.
Ref #4032
A private key of 256 bits is generated pseudo-randomly using the jitterentropy
VFS plugin during initialization. The private key is stored in the key file
encrypted via AES256 using the SHA256 hash of the user's passphrase. When
unlocking the CBE device, the encrypted private key is read from the key file
and decrypted with the hash of the user's passphrase.
Ref #4032
Instead of using the user passphrase directly, use its SHA256 hash calculated
using libcrypto. The passphrase hash is still stored in the key file to be
used as the base for the very primitive way of generating the private key.
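A minimal sketch of the hashing step, assuming libcrypto's one-shot SHA256
call (the file vault's actual code may use a different interface):

! #include <openssl/sha.h>
!
! void hash_passphrase(char const *passphrase, size_t len,
!                      unsigned char digest[SHA256_DIGEST_LENGTH])
! {
! 	SHA256((unsigned char const *)passphrase, len, digest);
! }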
Ref #4032
Closing the keyfile handle after a write operation wasn't synchronised with
the actual end of the write operation.
Issuing a write operation at the back end returns successfully as soon as the
back end has acknowledged that it will execute the operation. However, the
actual writing of the data might still be in progress at this point. The
plugin used to close the file handle and declare the operation finished at
this point, which led to warnings about acks on unknown file handles and to
leaking resources. Now, the plugin issues a sync operation directly after the
write operation and waits for the sync to complete. This ensures that the
plugin doesn't declare the operation finished too early.
Ref #4032
The unlocking operation in the trust anchor was broken, which caused bad keys
in the CBE. This rewrites the whole operation to work as desired. Note that
this doesn't make it any safer! The private key is still almost the same as
the passphrase and stored in plaintext.
Ref #4032
The plugin used to close file handles via 'vfs_env.root_dir.close'.
However, this led to resource leaks and apparently isn't the right way to
do it. Other VFS plugins call 'close' directly on the handle, and doing the
same in the trust-anchor plugin fixes the leaks.
Ref #4032
Closing the hashfile handle after a write operation wasn't synchronised with
the actual end of the write operation.
Issuing a write operation at the back end returns successfully as soon as the
back end has acknowledged that it will execute the operation. However, the
actual writing of the data might still be in progress at this point. The
plugin used to close the file handle and declare the operation finished at
this point, which led to warnings about acks on unknown file handles and to
leaking resources. Now, the plugin issues a sync operation directly after the
write operation and waits for the sync to complete. This ensures that the
plugin doesn't declare the operation finished too early.
Ref #4032
There was no means of issuing a Deinitialize request to the CBE using the
CBE VFS plugin. The new control/deinitialize file fixes this. When writing
"true" to the file, a Deinitialize request is submitted to the CBE. When
reading the file, the state of the operation is returned as a string of the
format "[current_state] last-result: [last_result]", where [current_state] can
be "idle" or "in-progress" and [last_result] can be "none", "success", or
"failed".
Ref #4032
When discarding a snapshot, the CBE VFS plugin didn't communicate the ID of
the snapshot to the CBE. Instead, it set the ID argument to 0. Therefore, the
operation never had any effect.
Ref #4032
The snapshots file system used to return the number of snapshots from
'num_dirent' when called for the root directory, although it was expected to
return 1. This confused the tooling on top of the VFS.
Ref #4032
Despite being readable, the files control/extend and control/rekey reported
themselves as not readable when queried. This caused the fs_query tool not to
report the content of the files although it could have.
Ref #4032
Stat calls on the control/extend and control/rekey files returned a bogus file
size, which led to an error in the VFS File_content tool. The tool complained
that the size of the file determined while reading the content differed from
the one reported by the stat operation. Now, the stat call will always
determine the actual size of what would be read. However, it isn't guaranteed
that this size doesn't change between the stat operation and the read
operation.
Ref #4032
The service is loaded dynamically from VBoxSharedClipboard.so at runtime. The
VFS configuration mounts the shared object at /VBoxSharedClipboard.so as
the file is checked by contrib code before loading. An init
configuration in pkg/vbox6/runtime illustrates this and how to re-label
the VBoxSharedClipboard.so ROM to its real name
virtualbox6-sharedclipboard.lib.so.
During Windows 10 boot with sequential block requests, the AHCI request
worker finished earlier than the EMT thread signaled hEvtProcess and began
waiting for hEvtProcessAck indefinitely. The timeout helps to survive this
short phase.
A better solution would use condition variables, which are not provided by
VirtualBox's runtime.
This patch introduces a C API to be used by input drivers to generate
Genode events. The initial version is limited to multitouch events only.
Fixes #4273
* Use the architecture-dependent minimal alignment for all allocations;
  e.g., on ARM it is necessary to have cacheline-aligned allocations for DMA
* Remove the allocation functions without alignment from the generic API
* Fix a warning
Fix #4268
After a DMA transaction, invalidate cachelines of the corresponding DMA
buffers only if data got transferred from the device to the CPU, and not
vice versa. Otherwise, data corruption might result.
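A sketch of the resulting rule; the direction type and the cache primitive
are stand-ins, not the actual driver interface:

! enum class Dma_direction { TO_DEVICE, FROM_DEVICE };
!
! inline void cache_invalidate(void *, unsigned long) { /* platform-specific */ }
!
! void dma_finished(void *buf, unsigned long size, Dma_direction dir)
! {
! 	/* only device-to-CPU transfers need invalidation here; doing it for
! 	   CPU-to-device transfers could throw away dirty cachelines */
! 	if (dir == Dma_direction::FROM_DEVICE)
! 		cache_invalidate(buf, size);
! }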
Ref #4268
The former implementation did not internally track which ROM changes were
notified vs. delivered to the client. We adapt the versioning implemented in
dynamic_rom_session.h and enable explicit notification of the current version.
The feature is used by the clipboard to notify permitted readers of the
clipboard ROM service on focus change via the newly created private
Rom::Module::_notify_permitted_readers() function.
Fixes #4274
The includes for the address-space-ID allocator and the translation table are
usually specific to the CPU in use. Therefore, these includes can be moved
from their current location in the board header to the CPU headers. This
reduces the number of decisions a board maintainer has to make if the targeted
CPU model is already available.
This can probably also be applied to other includes in the board headers, but
I intentionally leave it for a future commit as I don't have the time to do it
all now.
Ref #4217
For base-hw Core, we used to add quite a few hardware-specific include paths
to 'INC_DIR'. Generic code used to include, for instance, '<cpu.h>' and
'<translation_table.h>' using these implicit path resolutions. This commit
removes hardware-specific include paths except for
1) the '<board.h>' include paths (e.g., 'src/core/board/pbxa9'),
2) most architecture-specific include paths (e.g., 'src/core/spec/arm_v7'),
3) include paths that reflect the usage of virtualization or ARM TrustZone
(e.g., 'src/core/spec/arm/virtualization').
The first category is kept because, in contrast to the former "spec"-mechanism,
the board variable used for this type of resolution is not deprecated and the
board headers are meant to be the front end of hardware-specific headers
towards generic code which is why they must be available generically via
'<board.h>'.
The second category is kept because it was suggested by other maintainers that
simple arch-dependent headers (like for the declaration of a CPU state) should
not imply the inclusion of the whole '<board.h>' and because the architecture
is given also without the former "spec"-mechanism through the type of the build
directory. I think this is questionable but am fine with it.
The third category is kept because the whole way of expressing whether
virtualization or ARM TrustZone is used is outdated, and changing it now would
blow up this commit a lot and exceed the time that I'm willing to spend. This
category should be the subject of a future issue.
Ref #4217
The 'src/core/board/<board>/board.h' header is meant as the front end of the
hardware-specific headers of a given board towards the generic base-hw Core
code. Therefore, it leads to problems (circular includes) if the board.h
header is included from within another hardware-specific header.
If hardware-specific headers access declarations from namespace Board in a
definition, the definition should be moved to a compilation unit that may
include board.h. If hardware-specific headers access declarations from board.h
in a declaration, they should either use the primary declaration from the
original header or, if the declaration must be selected according to the board,
another board-specific header should be introduced to reflect this abstraction.
This is applied by this commit for the current state of base-hw.
Ref #4217
It is not necessary to have a class, an object, and a generic header for the
performance counter. The kernel merely enables the counter using CPU registers
('msr' instructions, no MMIO) on arm_v6 and arm_v7 only. Therefore, this commit
makes the header arm-specific and replaces class and global static object with
a function for enabling the counter.
Fixes #4217
Let the kernel's driver for the global IRQ controller be a member of the one
Kernel::Main object instead of having it as a static variable in the drivers
for the local IRQ controllers. Note that this commit refrains from renaming
'Pic' to 'Local_interrupt_controller', which would be more sensible now with
the new 'Global_interrupt_controller' class. Furthermore, on ARM boards, the
commit doesn't move the 'Distributer' parts to the new global IRQ-controller
class as they don't have real data members (only MMIO) and can be instantiated
for each CPU anew. However, the right way would be to instantiate them only
once in Main as well.
Ref #4217
The unmanaged-singleton approach was used in this context only because of the
alignment requirement of the Core main-UTCB. This, however, can also be
achieved with the new 'Aligned' utility, allowing the UTCB to be a member of
the Core main-thread object.
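A sketch of how such an alignment utility may look (the actual 'Aligned'
helper in base-hw may differ in detail; the UTCB type is a placeholder):

! template <typename T, unsigned ALIGN>
! struct Aligned
! {
! 	alignas(ALIGN) T object;
! };
!
! /* the UTCB can then be an ordinary member of the main-thread object */
! struct Core_main_thread
! {
! 	Aligned<char[4096], 4096> utcb; /* placeholder for the UTCB type */
! };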
Ref #4217
It's sufficient to access the boot info only at kernel-initialization time.
Therefore, it can remain completely hidden from the rest of the kernel inside
the initialization function in kernel/main.cc.
Ref #4217
This commit introduces the Kernel::Main class that replaces the former way of
initializing the kernel (former 'kernel_init' function) and calling the C++
kernel entry handler (former 'kernel' function). These two are now
'Main::initialize_and_handle_kernel_entry' and 'Main::handle_kernel_entry'.
Reading the execution time of the idle threads has also already been moved to
'Main'. The one static Main instance is meant to successively replace all the
global static objects of the base-hw kernel with data members of the Main
instance, making the data model of the kernel much more comprehensible. The
instance and most of its interface are hidden in kernel/main.cc. There are only
rare cases where parts of the Main interface must be accessible from the
outside. This should be done in the most specific way possible (see main.h)
and, if possible, without handing out references to Main data members or the
Main instance itself.
Ref #4217
Normally, the board header can be found for each supported board under
'src/core/board/<BOARD>/board.h'. This was not the case for the board 'pc',
whose header was located under 'src/core/spec/x86_64/board.h'. This commit
fixes that.
Ref #4217
The class name Core_thread in Kernel for the object of the first thread of
Core is too generic, as there can be an arbitrary number of threads in Core
besides this one. Furthermore, creating a Core thread has its own syscall
'new_core_thread' that isn't related in any way to Core_thread. Therefore,
this commit introduces the more specific name Core_main_thread as a
replacement for Core_thread.
Ref #4217
The function was still used only for reading the execution time of the idle
threads of the CPUs. Certainly, it is technically fine and more performant to
read these values directly from the kernel objects without doing a syscall.
However, calling cpu_pool() for this provides read and write access to a lot
more than only the execution-time values. The interface via which Core
directly reads kernel state should be as narrow and specific as possible.
In the long run, we want to get rid of the cpu_pool() accessor anyway.
Therefore, this commit introduces
Kernel::read_idle_thread_execution_time(cpu_idx) as a replacement. The
function is implemented in kernel code and called by Core in platform.cc.
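The resulting accessor has roughly the following shape (the exact return type
is an assumption):

! namespace Kernel {
!
! 	/* narrow accessor: execution time of one CPU's idle thread, nothing more */
! 	unsigned long long read_idle_thread_execution_time(unsigned cpu_idx);
! }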
Ref #4217
Apparently, there is no need for exposing the data members of Trace_source, so
we should better make them private before someone gets the impression that
they are meant to be accessed directly.
Ref #4217
Core used to read the kernel-reserved IRQs from the timer objects in the
kernel's CPU objects and the PIC class (inter-processor IRQ). Besides not
being "good style" to access a kernel object in Core, this becomes a problem
when trying to prevent the CPU pool from being accessed via global functions.
As a solution, this commit extends the boot info to also carry an array of all
kernel-reserved IRQs.
Ref #4217
There are two variants of the Kernel_object<T> constructor: one for the case
that it is called from Core, where the kernel object (of type T) must be
created via a syscall, and one for the case that it is called from within the
kernel, where the kernel object can be created directly. Selecting one of
these variants was done using a bool argument to the constructor. However,
this implies that the constructor of Kernel_object<T> and that of T have the
same signature in the variadic arguments, even in the syscall case, although
technically it would then not be necessary.
This becomes a problem as soon as kernel objects created by Core shall receive
additional arguments from the kernel, for instance a reference to the global
CPU pool, and therefore stands in the way of getting rid of global statics in
the kernel. Therefore, this commit introduces two constructors that are
selected through enum arguments:
! Kernel_object(Called_from_kernel, ...);
! Kernel_object(Called_from_core, ...);
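A simplified sketch of the pattern; the real Kernel_object additionally
manages the object's storage and capability:

! enum Called_from_kernel { CALLED_FROM_KERNEL };
! enum Called_from_core   { CALLED_FROM_CORE };
!
! template <typename T>
! class Kernel_object
! {
! 	public:
!
! 		template <typename... ARGS>
! 		Kernel_object(Called_from_kernel, ARGS &&...)
! 		{
! 			/* construct T directly, forwarding the arguments to T's
! 			   constructor */
! 		}
!
! 		template <typename... ARGS>
! 		Kernel_object(Called_from_core, ARGS &&...)
! 		{
! 			/* issue the creation syscall; the arguments no longer have
! 			   to match T's constructor signature */
! 		}
! };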
Ref #4217
The various mapping methods are modelled after the requirements of
the Intel GPUs, or rather the Mesa driver back end.
With upcoming support for other driver back ends, we need to
squeeze their requirements in as well. For now, hijack 'map_buffer'
to provide for specifying the kind of attributes the client needs.
All buffers mapped in the GGTT for Intel GPUs are treated as RW for now.
Issue #4265.
This call allows for checking whether the given execution buffer has been
completed and complements the completion signal. Initially, the GPU
multiplexer always sent such a signal when the currently scheduled
execution buffer had been completed. During enablement of the 'iris'
driver, it became necessary to properly check the sequence number.
In the case of the Intel GPU multiplexer, the sequence numbers are
continuous, which prompted the greater-than-or-equal check in the
DRM back end. Hiding this implementation detail behind the interface
leaves GPU drivers free to deal with sequence numbers any way they like
and allows for polling in the client, where the completion signal is now
more of a progress signal.
Issue #4265.
The current info implementation (as RPC) is limited in a few ways:
* The amount of data that may be transferred is constrained by the
  underlying base platform
* Most information never changes during run time but is copied
  nonetheless
* The information differs depending on the GPU device in use and, in its
  current implementation, contains only Intel-GPU-specific details
With this commit, the 'info' RPC call is replaced with the
'info_dataspace' call that transfers only the capability for the dataspace
containing the information. This is complemented by a client-local
'attached_info' call that allows for getting typed access to the
information. The layout of the information is moved to its own
GPU-specific header file, e.g., 'gpu/info_intel.h'.
Issue #4265.
Rather than using the dataspace capability directly, let the client
choose its own local identifier that is linked to the underlying
capability.
Fixes #4265.
Right now, the warning about a failure to forward a packet from the driver to
the uplink RX connection reads:
"exception while trying to forward packet from driverto Uplink
connection TX"
Add the missing space between "driver" and "to".
Issue #4264
32KB is a rather small value. The driver can cope with it now, but
it does not perform as well as it should. This is visible especially
in scenarios like nic_router_flood, where we still often hit the
synchronous wait path. Bump the size to 256KB.
Issue #4264
The problem can be seen when running the nic_router_flood scenario on ARM
qemu_virt boards. With the amount of data this scenario tries to send, the
driver quickly complains that it has failed to push data into the TX VirtIO
queue. After this warning message is printed, nothing really happens, and
after a while the test scenario fails.
The fact that we can't write all available data to the device is not
unexpected. The VirtIO queue size is selected at initialization time and we
don't change it during the driver's lifetime. It can be tweaked via the driver
config, but this does not change the fact that we'll always be able to
produce more data packets than we have free space for in the VirtIO queue.
IMO, the expected behavior of the driver in such a case should be to:
1. Notify the device there is data to process.
2. Wait for the device to process at least part of it.
3. Retry sending queued packets.
One could expect that returning Transmit_result::RETRY from _drv_transmit_pkt
would produce such a result. Unfortunately, it seems that Uplink_client_base
treats the RETRY return value as an indication of the link being down. It'll
retry sending the packet only after the device notifies it that the link is
once again up. This is the reason why nothing happens when running
nic_router_flood on top of the virtio_nic driver. The link never goes down
in this case, so once we fill the TX VirtIO queue and tell the base class
to retry the send, we'll be stuck waiting for a link-up change event that
will never arrive.
To fix this problem, when sending a packet to the device fails, do a
synchronous TX VirtIO queue flush (tell the device there is data to process
and wait until it is done with it).
With this fix in place, the nic_router_flood test scenario passes on both ARM
qemu_virt boards.
Issue #4264
The contents of those descriptor rings can be modified by the device.
Mark them as volatile so the compiler does not make any assumptions
about them.
Issue #4264
This commit contains features and bug fixes:
* Catch errors during resource allocation
* Because Mesa tries to allocate fence (hardware) registers for each
batch buffer execution, do not allocate new fences for buffer objects
that are already fenced
* Add support for global hardware status page. Each context additionally
has a per-process hardware status page, which we used to set the
global hardware status page during Vgpu switch. This was obviously
wrong. There is only one global hardware status page (set once during
initialization) and a distinct per-process page for contexts.
* Write the sequence number of the currently executing batch buffer to
dword 52 of the per-process hardware status page. We use the pipeline
command with QW_WRITE (quad word write), GLOBAL_GTT_IVB disabled
(address space is per-process address space), and STORE_DATA_INDEX
enabled (write goes to offset of hardware status page). This command
used to write to the scratch page. But Linux now uses the first
reserved word of the per-process hardware status page.
* Add the Gen9+ WaEnableGapsTsvCreditFix workaround. This sets the "GAPS TSV
  Credit fix Enable" bit of the arbiter control register (GARBCNTLREG). As
  described by the documentation, this bit should be set by the BIOS but is
  not on most Gen9/9.5 platforms. Not setting this bit leads to random GPU
  hangs.
* Increase the context size from 20 to 22 pages for Gen9. On Gen8 the
hardware context is 20 pages (1 hardware status page + 19 ring context
  register pages). On Gen9, the size of the ring context registers has
  increased by two pages to 21 pages (81.3125 KBytes), as the IGD
  documentation states.
* The logical ring size in the ring buffer control of the execlist context
  has to be programmed with the number of pages minus 1, i.e., 0 denotes
  1 page. We previously programmed the actual number of pages, leading to
  ring-buffer execution of NOOPs if the page behind our ring buffer was
  empty, or to GPU hangs if there was data on that page.
Issue #4260
* Wait for completion before returning from 'execbuffer2'. This makes
  buffer execution synchronous.
* Because the Iris driver manages the virtual address space of the GPU
  and creates one GEM context for each batch buffer, we have to map/unmap
  all buffer objects before and after batch-buffer execution.
Issue #4260
The lx_emul_virt_to_pages implementation initialized the page ref
counter only for the first page, leaving the remaining elements in an
uninitialized state. This, in turn, rendered the Linux page_pool (as
used by the emac network driver) ineffective, ultimately leading to a
memory leak. The fix changes the call of 'init_page_count' to take the
loop variable as its argument.
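A sketch of the corrected loop; the struct and the helper are stand-ins for
the Linux/lx_emul definitions:

! struct page { int _refcount; };
!
! static void init_page_count(struct page *p) { p->_refcount = 1; }
!
! static void virt_to_pages_init(struct page *pages, unsigned long nr)
! {
! 	for (unsigned long i = 0; i < nr; i++)
! 		init_page_count(&pages[i]); /* formerly always &pages[0] */
! }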
Issue #4225
Increased the number of trace subjects since the test sporadically fails on
some platforms.
Also added a sanity check to print an error message in case we run into
the same issue again.
Fixes genodelabs/genode#4261