If the attribute 'interface' is not set in a 'domain' tag, the router tries to
dynamically receive and maintain an IP configuration for that domain by using
DHCP in the client role at all interfaces that connect to the domain. In the
DHCP discover phase, the router simply chooses the first DHCP offer that
arrives. So, no comparison of different DHCP offers is done. In the DHCP
request phase, the server is expected to provide an IP address, a gateway, a
subnet mask, and an IP lease time to the router. If anything substantial goes
wrong during a DHCP exchange, the router discards the outcome of the exchange
and goes back to the DHCP discover phase. Whenever there is no valid
IP configuration present at a domain, the domain acts only as a DHCP client
and all other router functionality is disabled for the domain. A domain cannot
act as DHCP client and DHCP server at the same time. So, a 'domain' tag must either
have an 'interface' attribute or must not contain a 'dhcp-server' tag.
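For illustration, a domain that shall obtain its IP configuration via DHCP is
simply declared without the 'interface' attribute (the domain name is an
arbitrary example):

  <domain name="uplink">
    ...
  </domain>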
Ref #2534
An IPv4 config (for a domain/interface of the router) consists of
an IPv4 address, a subnet prefix specifier, an optional gateway
IPv4 address, and some flags that declare whether these fields and
the config as a whole are valid. To make the handling of those
tightly connected values easier and less error-prone, we encapsulate
them in a new class.
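A minimal sketch of such an encapsulation (member names and types are
illustrative, not necessarily the actual class):

  #include <base/stdint.h>
  #include <net/ipv4.h>

  struct Ipv4_config
  {
     Net::Ipv4_address address       { };        /* interface address      */
     Genode::uint8_t   prefix        { 0 };      /* subnet prefix length   */
     Net::Ipv4_address gateway       { };        /* optional gateway       */
     bool              gateway_valid { false };  /* is 'gateway' usable?   */
     bool              valid         { false };  /* is the config usable?  */
  };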
Ref #2534
Under certain circumstances we don't want init's state report to become too
outdated even if there is no change to its config or the sessions of its
children. This is the case if init is requested to provide capability or RAM
info of its children via its state report. Now, init automatically updates
the state report every 1000 ms if the attribute 'child_caps' or
'child_ram' is set to a positive value in the 'report' tag.
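For example, a report declaration in init's config that requests capability
and RAM info of the children (and thereby enables the periodic update) might
look as follows (surrounding config abbreviated):

  <config>
    ...
    <report child_ram="yes" child_caps="yes"/>
    ...
  </config>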
The tools.conf in base-fiasco/etc was obsolete and is therefore no longer
packaged. This commit removes sporadic recipe-hash changes caused by a
missing or existing etc/tools.conf.
Timing itself costs time. Thus, the stressful timeout phase of the
test is not exactly as long as set but a little bit longer. This is why the
fast timeouts are able to trigger more often than they are expected to
(the timer has a static timeout-rate limit). Normally we account for this effect
with an error tolerance of 10%. But at least on foc x86_32 (PIT with very
low max timeout), timing is so expensive that 10% is not enough. We have to
raise it to 11%.
This patch propagates the 'Service_denied' condition of forwarded sessions
to the parent. Without it, the invalid session request stays pending
indefinitely, which leads to the problem described in issue #2542. It
turns out that the solution suggested in the issue text is actually
not needed when applying this fix.
Fixes #2542
The ROM filter did not handle the situation where the generated content
exceeds the size of the initially allocated dataspace for the target
buffer. This patch wraps the XML generation in a retry loop that
expands the buffer as needed.
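The retry loop follows a common pattern, sketched here in simplified form
(helper and exception names are illustrative, not the literal rom_filter code):

  Genode::size_t buffer_size = 4096;
  for (;;) {
     try {
        generate_output(buffer_size);   /* throws if the buffer is too small */
        break;
     }
     catch (Buffer_exceeded) {
        buffer_size *= 2;               /* retry with a larger dataspace */
     }
  }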
This patch makes the specification of screen coordinates more flexible.
First, the 'origin' attribute allows one to refer to either of the four
screen corners without knowing the screen size. Second, the 'width'
and 'height' values now accept negative values, which are relative to
the screen size.
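For illustration (the element name and the 'origin' value are assumptions,
not taken from the actual syntax), one could refer to the lower-right corner
and specify a size relative to the screen like this:

  <view origin="bottom_right" width="-100" height="-100"/>

Here, the negative width and height would correspond to the screen width and
height reduced by 100 pixels each.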
The 'File_system::Connection' already performs an on-demand session
upgrade should the server report an 'Out_of_caps' or 'Out_of_ram'
condition. So file-system clients are normally relieved from handling
those exceptions. However, the upgrade was limited to two attempts per
operation (which amounts to 16 KiB). When using the Rump VFS plugin in
the VFS server, this amount does not always suffice. So the exception is
reflected to the client. I observed this problem as a message "unhandled
error" printed by fs_rom. This patch removes the upgrade limit such that
a greedy file-system server is upgraded iteratively until it stops
complaining or the client's RAM is exhausted.
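The resulting behavior roughly corresponds to the following sketch (not the
literal 'File_system::Connection' code; the helper names and the caps step
are placeholders):

  for (;;) {
     try { return _perform_operation(); }
     catch (Genode::Out_of_ram)  { _upgrade_ram(8*1024); }  /* 8 KiB per step */
     catch (Genode::Out_of_caps) { _upgrade_caps(2); }
  }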
* Instead of always re-loading page tables when a thread context is switched,
only do this when another user PD's thread is the next target;
core threads are always executed within the last PD's page-table set
* remove the concept of the mode transition
* instead map the exception vector once in bootstrap code into kernel's
memory segment
* when a new page directory is constructed for a user PD, copy over the
top-level kernel segment entries on RISC-V and x86; on ARM we use a designated
page-directory register for the kernel segment
* transfer the current CPU id from bootstrap to core/kernel in a register
to ease first stack address calculation
* align the CPU context member of threads and VMs because of x86 constraints
regarding the stack-pointer loading
* introduce Align_at template for members with alignment constraints
(see the sketch after this list)
* let the x86 hardware do part of the context saving in ISS, by passing
the thread context into the TSS before leaving to user-land
* use one exception vector for all ARM platforms including Arm_v6
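The 'Align_at' utility mentioned above could look roughly like this (a
simplified sketch, not the actual implementation):

  template <typename T, unsigned ALIGN>
  struct Align_at
  {
     alignas(ALIGN) T object { };   /* member placed with the given alignment */
  };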
Fix #2091
* introduce new syscall (core-only) to create privileged threads
* take the privilege level of the thread into account
when doing a context switch
* map kernel segment as accessible for privileged code only
Ref #2091
Always switch to the "exception stack" during exceptions/interrupts instead of
relying on the hardware-initiated stack switch, which happens only when the
privilege level changes. Moreover, this commit slightly increases the size of
the exception stack.
Ref #2091
* introduces central memory map for core/kernel
* on 32-bit platforms the kernel/core starts at 0x80000000
* on 64-bit platforms the kernel/core starts at 0xffffffc000000000
* mark kernel/core mappings as global ones (tagged TLB)
* move the exception vector to the beginning of core's binary,
so that bootstrap knows from where to map it appropriately
* do not map boot modules into core anymore
* constrain core's virtual heap memory area
* differentiate between the user's and core's main-thread UTCB,
which now resides inside the kernel segment
Ref #2091
Previously, the router printed an error line for each affected packet, but it
is pretty normal for the router to receive packets whose network-layer
protocol it doesn't know. In the default case, these packets shall be
ignored silently.
Ref #2490
One can configure the NIC router to act as DHCP server at interfaces of a
domain by adding the <dhcp> tag to the configuration of the domain like
this:
<domain name="vbox" interface="10.0.1.1/24">
<dhcp-server ip_first="10.0.1.80"
ip_last="10.0.1.100"
ip_lease_time_sec="3600"
dns_server="10.0.0.2"/>
...
</domain>
The attributes ip_first and ip_last define the available IPv4 address
range while ip_lease_time_sec defines the lifetime of an IPv4 address
assignment in seconds. The IPv4 address range must be in the subnet
defined by the interface attribute of the domain tag and must not cover
the IPv4 address in this attribute. The dns_server attribute gives the
IPv4 address of the DNS server, which may also be in another subnet.
An offered assignment is valid for the router's configured round-trip time;
the ip_lease_time_sec is applied only if the client requests the offer
within that time.
The ports/run/virtualbox_nic_router.run script is an example of how to
use the new DHCP server functionality.
Ref #2490
Previously, garbage collection was done only when an incoming packet had passed
the Ethernet checks. Now it is done first whenever a packet is received at an
interface.
Ref #2490
If the router has no gateway attribute for a domain (meaning that the router
itself is the gateway) and it receives an ARP request for a foreign IP, it
answers with its own IP.
Ref #2490
Do not use twice the RTT for the lifetime of links but use it as
configured, to simplify the usage of the router. Internally, use the
Microseconds/Duration types instead of plain integers.
Ref #2490
The nic_dump component uses a wrapper for all supported protocols that
takes a packet and a verbosity configuration. The wrapper object can
then be used as an argument to a Genode log function and prints the
packet's contents according to the given configuration. The
configuration is a distinct class to enable the reuse of one instance
for different packets.
There are currently 4 possible configurations for each protocol:
* NONE (no output for this protocol)
* SHORT (only the protocol name)
* COMPACT (the most important information densely packed)
* COMPREHENSIVE (all header information of this protocol)
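The wrapper pattern can be sketched as follows (simplified; the type and
member names are illustrative, not the actual nic_dump classes). Any object
that implements 'print(Genode::Output &)' can be passed to 'Genode::log':

  #include <base/log.h>
  #include <base/output.h>
  #include <net/ethernet.h>

  struct Packet_config { enum Level { NONE, SHORT, COMPACT, COMPREHENSIVE } eth; };

  struct Eth_packet_log
  {
     Net::Ethernet_frame const &eth;
     Packet_config       const &config;

     void print(Genode::Output &out) const
     {
        if (config.eth == Packet_config::NONE) return;
        Genode::print(out, "ETH");
        if (config.eth != Packet_config::SHORT)
           Genode::print(out, " ...");   /* emit header fields according to 'config' */
     }
  };

  /* usage: given an Ethernet frame 'eth' and a reusable 'config' instance */
  Genode::log(Eth_packet_log { eth, config });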
Ref #2490
Provide utilities for appending new options to an existing DHCP packet
and a utility for finding existing options that returns a typed option
object. Remove the old versions that return untyped options.
Ref #2490
Apply the style rule that an accessor is named after the underlying
value. Provide read and write accessors for each mandatory header attribute.
Fix some incorrect structures in the headers, like the flags field
in Ipv4_packet.
Ref #2490
Encapsulate the enum into a struct so that it is named
Ethernet_frame::Type::Enum, give it the correct storage type
uint16_t, and remove those values that are, to my knowledge, not
currently used in Genode or elsewhere.
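A sketch of the resulting structure (the exact set of retained values may
differ):

  #include <base/stdint.h>

  struct Ethernet_frame
  {
     struct Type
     {
        enum Enum : Genode::uint16_t {
           IPV4 = 0x0800,   /* EtherType of IPv4 */
           ARP  = 0x0806,   /* EtherType of ARP  */
        };
     };
     /* ... */
  };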
Ref #2490
Do not stop routing if the transport-layer protocol is unknown but
continue with trying IP routing instead. The latter was already
done when no transport routing could be applied, but for unknown transport
protocols, the exception was caught at the wrong place.
Ref #2490
No starvation of timeout signals
--------------------------------
Add several timeouts < 1ms to the stress test and check that timeout
handling doesn't become significantly unfair (starvation) in this situation
where some timeouts trigger much faster than they get handled.
Rate limiting for timeout handling in timer
-------------------------------------------
Ensure that the timer does not handle timeouts again within 1000
microseconds after the last handling of timeouts. This makes denial of
service attacks harder. This commit does not limit the rate of timeout
signals handled inside the timer but it causes the timer to do it less
often. If a client continuously installs a very small timeout at the
timer it still causes a signal to be submitted to the timer each time
and some extra CPU time to be spent in the internal handling method. But
this internal handling causes user timeouts to trigger only every 1000
microseconds.
If we also wanted to limit the calls to the internal handling method, to
ensure that CPU time is spent (besides the RPCs) only every 1000
microseconds, things would get more complex. For instance, on NOVA
Time_source::schedule_timeout(0) must be called each time a new timeout
gets installed and becomes head of the scheduling queue. We cannot
simply overwrite the already running timeout with the new one.
Ref #2490
In the past, a signal context that was chosen for handling by
'Signal_receiver::pending_signal' and was always triggered again before
the next call of 'pending_signal' caused all other contexts behind it
in the list to starve. This was the case because 'pending_signal'
always took the first pending context in its context list.
We avoid this problem now by handling pending signals in a round-robin
fashion instead.
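The idea can be sketched as follows (illustrative data structures, not the
actual 'Signal_receiver' code): instead of always starting at the head of the
context list, the search for a pending context resumes after the context that
was returned last.

  /* 'contexts', 'num_contexts', and 'next' are members of the receiver */
  Context *pending_signal()
  {
     for (unsigned i = 0; i < num_contexts; i++) {
        unsigned const pos = (next + i) % num_contexts;
        if (contexts[pos]->pending()) {
           next = (pos + 1) % num_contexts;   /* continue after this context */
           return contexts[pos];
        }
     }
     return nullptr;   /* no context pending */
  }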
Ref #2532
Previously, we did not set the correct now_period, but this wasn't conspicuous
because the bug did not trigger before a full period had passed, which on most
platforms is a pretty long time.
Ref #2490
Ensure that the timer does not handle timeouts again within 1000
microseconds after the last handling of timeouts. This makes denial of
service attacks harder. This commit does not limit the rate of timeout
signals handled inside the timer but it causes the timer to do it less
often. If a client continuously installs a very small timeout at the
timer it still causes a signal to be submitted to the timer each time
and some extra CPU time to be spent in the internal handling method. But
this internal handling causes user timeouts to trigger only every 1000
microseconds.
If we also wanted to limit the calls to the internal handling method, to
ensure that CPU time is spent (besides the RPCs) only every 1000
microseconds, things would get more complex. For instance, on NOVA
Time_source::schedule_timeout(0) must be called each time a new timeout
gets installed and becomes head of the scheduling queue. We cannot
simply overwrite the already running timeout with the new one.
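The rate-limiting logic roughly corresponds to the following sketch
(simplified; the names are placeholders, not the actual timer code):

  void _handle_timeouts(unsigned long now_us)
  {
     /* trigger user timeouts at most once per 1000 microseconds */
     if (now_us - _last_trigger_us >= 1000) {
        _last_trigger_us = now_us;
        _trigger_user_timeouts(now_us);
     } else {
        /* defer triggering until the minimum period has passed */
        _schedule_wakeup(_last_trigger_us + 1000);
     }
  }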
Ref #2490
The remote shell facilities are past deprecation and there is an
obligation to prevent their use rather than to support them. This patch
removes the related function definitions from 'unistd.h', which have not
been included in the Genode libc ABI anyway.
Fix #2530
Remove getaddrinfo and freeaddrinfo from the Libc::Plugin and get rid of
the extra libc_resolv library. Remove getaddrinfo/freeaddrinfo symbol
hiding patch for FreeBSD sources. Remove libc_resolv from Makefiles and
run scenarios.
Fix #2273
Linux del_timer() and mod_timer() return whether the timer was pending before
the modification. Additionally, these functions are potentially called
from the handler function of the timer to be modified and, therefore, checking
for timeout != INVALID_TIMEOUT is not sufficient as the timeout is
indeed valid when the handler is executed.
This patch fixes an aliasing problem of the 'close' method signature
that prevented the Input::Root_component::close method from being called.
Consequently, the event-queue state was not reset at session-close time,
which prevented a subsequent session-creation request from succeeding. With
the patch, input servers like ps2_drv, usb_drv that rely on the
Input::Root_component support the dynamic re-opening of sessions. This
happens in particular when using a dynamically configured input filter.
We updated the alarm-scheduler time with the result of
Timer::Connection::curr_time when scheduling new timeouts but, when
handling the signal from the Timer server, with the result of
Timer::Connection::elapsed_us. Mixing time sources
like this could cause a non-monotonic time value in the alarm scheduler.
The alarm scheduler then assumed that the time value had wrapped and
triggered all timeouts immediately. This is fixed by always
using Timer::Connection::curr_time as the time source.
Ref #2490
This recipe copies the entire stdcxx library into the API archive, which
is an interim solution until we introduce a proper ABI for stdcxx. With
this current version, every user of the stdcxx ABI will implicitly build
the stdcxx library.
This patch merges two similar rules, which create content at 'include'
into a single rule. This prevents a possible race condition when
creating archives in parallel.
We moved the stack-area segment 128 MiB behind text and data to comply
with assumptions in the kernel ELF loader.
This commit also re-enables static binaries on Linux and removes the
unused stack_area.stdlib.ld script.
Fixes #2521
This is a drivers subsystem that starts the most fundamental
(framebuffer, input, block) device drivers dynamically, depending on the
runtime-detected devices. The discovered block devices are reported
as a "block_devices" report.
This patch applies the handling of cursor keys, function keys, and page
up/down keys even if no keymap is defined. This is the case when using
the terminal with character events produced by the input filter.
This patch is a workaround for the apparent problem that noux
applications, which perform execve, implicitly use functionality from
the dynamic linker, not explicitly via the libc. If the binary lacks the
dependency information, noux will fail on the execve attempt. The latter
is the case when the noux package is built as a depot archive where
library dependencies are not traversed over multiple levels.
In nested scenarios like driver_manager.run, the initial session quota
for IO_PORT, IO_MEM, and IRQ sessions is expectedly insufficient.
However, the condition is properly handled by re-attempting the request
with a slightly increased quota. Still, core prints a warning each time
the request is denied for quota reasons, which spams the log. This patch
removes the non-critical message.
Create periodic and one-shot timeouts with the maximum duration
to see if this triggers any corner-case bugs. They must not trigger during
the test.
Ref #2490
If we add an absolute timeout to the back-end alarm-scheduler we must first
call 'handle' at the scheduler to update its internal time value.
Otherwise, it might happen that we add a timeout whose deadline is so big that
it normally belongs to the next time-counter period but the scheduler thinks
that it belongs to the current period as its time is older than the one used
to calculate the deadline.
Ref #2490
When we have two time values of an unsigned integer type and take their
difference, determining whether the difference is positive or negative
by casting to a signed integer loses at least half of the value range.
This was the case in the alarm scheduler when checking whether an alarm
had already triggered. Even worse, we cast from 'unsigned long' to
'signed int', which caused further loss on at least x86_64. Thus, big
timeouts like ~0UL falsely triggered immediately.
Now, we use an extra boolean value to remember in which period of the
time counter we are and to which period of the time counter the deadline
of an alarm belongs. This boolean switches its value each time the time
counter wraps. This way, we can avoid any casting by checking whether the
current time is of the same period as the deadline of the alarm that we
inspect. If so, the alarm is pending if "current time >= alarm
deadline", otherwise it is pending if "current time < alarm deadline".
Ref #2490
If the PIT timer driver gets activated too slowly (e.g., because of a bad priority
configuration), it might miss counter wraps and would then produce sudden time
jumps. The driver now detects this problem dynamically, warns about it and
adapts the affected values to avoid time jumps.
Ref #2400
The cache directory content is slightly different on each prepare-port
run and fortunately not used at build time. So, we just remove it at the
end of port preparation.
When using the elfweaver to generate boot images, python stores
precompiled modules in the source directory beside the .py files. This
polluted the contrib source tree with binary files specific to the build
host. As a result the depot create tool picked up the changed source
tree and produced strange new hashes. Now, the tool sources are copied
to the build directory where python can do its optimizations and the
depot stays clean.
The NIC router always reports the link state "Up" (true) because
the effective link state depends on the targeted remote interface
and thus on the individual routing for each packet. Consequently,
the signal handler for link-state changes is ignored as well.
Ref #2490
IP stacks may treat a network interface as "down" when it states a MAC
address with the I/G bit (bit 40) set to "Group" (value 1) instead of
"Individual" (value 0). This was observed with a TinyCore 8 inside a
VirtualBox VM. Thus, the previously chosen 03:03:03:03:03:00 as the base
for the MAC address allocator was a bad choice. Now we use 02:02:02:02:02:00
instead. This also ensures that the MAC addresses are not marked as
"Universal" but as "Local" (bit 41, value 1), which is correct in general
as the router allocates MAC addresses only for virtual networks.
Ref #2490