Moreover, be strict when calculating the page-table requirements of
core, which are architecture specific, and declare the virtual-memory
requirements of core on a per-architecture basis.
Ref #1588
The ~Irq_session_component relied on the IRQ number obtained from the
corresponding kernel IRQ object to mark the IRQ as free at the IRQ
allocator. However, since the kernel IRQ object is not initialized
before the 'sigh' function is called, the IRQ of sessions that never
called 'sigh' could not be freed correctly. This patch fixes the
problem by not relying on the kernel IRQ object for obtaining the
number in the destructor but by using the '_irq_number' member variable
instead.
Instead of organizing page tables within slab blocks and allocating such
blocks dynamically on demand, replace the page-table allocator with a
simple, static alternative. The new page-table allocator is dimensioned
at compile time. When a PD runs out of page tables, we simply flush its
current mappings and re-use the freed tables. The only exception is
core/kernel, which should not produce any page faults. Therefore, it must
be ensured that core has enough page tables to populate its virtual
memory.
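The gist of such a statically dimensioned allocator looks roughly like the
following sketch (simplified, with hypothetical names, not the actual core
code):
! /* simplified sketch of a statically dimensioned page-table allocator */
! template <typename TABLE, unsigned COUNT>
! class Static_table_allocator
! {
!   private:
!
!     TABLE _tables[COUNT];    /* backing store, fixed at compile time */
!     bool  _used[COUNT] { };  /* occupancy of each table slot         */
!
!   public:
!
!     /* return a free table or nullptr if the allocator is exhausted */
!     TABLE *alloc()
!     {
!       for (unsigned i = 0; i < COUNT; i++)
!         if (!_used[i]) { _used[i] = true; return &_tables[i]; }
!       return nullptr; /* caller flushes the PD's mappings and retries */
!     }
!
!     void free(TABLE *table) { _used[table - _tables] = false; }
! };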
A positive side effect of this static approach is that the accounting
of memory used for page tables is now possible again. In the dynamic case,
no protocol existed that solved the problem of donating memory to core
during a page fault.
Fix #1588
This patch enables clients of core's TRACE service to obtain the
execution times of trace subjects (i.e., threads). The execution time is
delivered as part of the 'Subject_info' structure.
Right now, the feature is available solely on NOVA. On all other base
platforms, the returned execution times are 0.
Issue #813
Add a Platform::setup_irq_mode function which enables the IRQ session to
update the trigger mode and polarity of the associated IRQ according to
the session parameters. On ARM this function is a nop.
This change enables the x86_64 platform to support devices which use
arbitrary trigger modes and polarity settings, e.g. AHCI on QEMU and
real hardware.
Fixes #1528.
Because of helping, it is possible that a core thread that wants to
destroy another thread at the kernel is, at this point in time, using the
scheduling context of the thread that shall be destroyed. When building
without GENODE_RELEASE defined, this always triggers an assertion in the
kernel. But when building with GENODE_RELEASE defined, it might silently
lead to kernel-memory corruption. This commit eliminates the latter case.
Should be reverted as soon as the scheduler is able to remove its head.
Ref #1537
Placement new can be misleading, as we already overload the new operator
to construct objects via pointers to allocators. To prevent any problems here,
and to use one consistent approach, we can explicitly construct the object
with the already available 'construct_at' template function.
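For illustration, the pattern boils down to the following sketch (a
simplified stand-in for the utility, with a hypothetical 'Widget' type):
! #include <new>  /* placement new, used internally by the helper */
!
! /* simplified variant of the 'construct_at' utility */
! template <typename T, typename... ARGS>
! static T *construct_at(void *at, ARGS &&... args)
! {
!   return new (at) T(static_cast<ARGS &&>(args)...);
! }
!
! /* call site: build an object inside a pre-allocated, suitably sized buffer */
! struct Widget { unsigned id; Widget(unsigned id) : id(id) { } };
!
! alignas(Widget) static char buffer[sizeof(Widget)];
! Widget *widget = construct_at<Widget>(buffer, 42);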
Ref #1443
* Introduce a hw-specific Address_space interface for protection
domains, which combines all memory-virtualization related functionality
* Introduce a core-specific Platform_pd object that solves all the
chicken-and-egg problems formerly distributed across kernel and
core-platform code
Ref #595
Ref #1443
The assumption that IRQs in the legacy ISA range are always
edge-triggered is wrong. For the free-for-use IRQs it depends on the
actual device which uses the specific IRQ. Therefore, treat IRQs 9, 10
and 11 as level-triggered.
Enable a platform to specify how the MMIO memory allocator is to be
initialized. On ARM the existing behavior is kept while on x86 the I/O
memory is defined as the entire address space excluding the core-only
RAM regions. This aligns the hw_x86_64 I/O memory allocator
initialization with how it is done for other x86 kernels such as NOVA or
Fiasco.
Perform lazy-initialization of FPU state when it is enabled for the
first time. This assures that the FXSAVE area (including the stored
MXCSR) is always properly setup and initialized to the platform default
values.
Perform all FPU-related setup in the Cpu class' init_fpu function instead of
the general system bring-up assembly code.
Set all required control register 0 and 4 flags according to Intel SDM Vol. 3A,
sections 9.2 and 9.6 instead of only enabling FPU error reporting and OSFXSR.
In the past, when the user blocked for an IRQ signal, the last signal was
acknowledged automatically, thereby unmasking the IRQ. Now, the signal
session got a dedicated RPC for acknowledging IRQs, and the HW back-end of
that RPC acknowledged the IRQ signal too. This led to the situation that
IRQs were unmasked twice. However, drivers expect an interrupt to be
unmasked only upon Irq_session::ack_irq, and thus IRQ unmasking was moved
from Kernel::ack_signal to a dedicated kernel call.
Fixes #1493
The thread library (thread.cc) in base-foc shared 95% of the code with
the generic implementation except myself(). Therefore, its
implementation is now separated from the other generic sources into
myself.cc, which allows base-foc to use a foc-specific primitive to
enable our base libraries in L4Linux.
Issue #1491
Physical CPU quota was previously given to a thread only on construction,
by directly specifying a percentage of the quota of the corresponding CPU
session. Now, a new thread is given a weighting that can be any value. The
physical quota that corresponds to such a weighting depends on the
weightings of the other threads at the CPU session. Thus, the physical
quota of all threads of a CPU session must be updated whenever a weighting
is added or removed, i.e., each time the session creates or destroys a
thread.
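In other words, the physical quota of each thread is derived from its share
of the summed-up weightings, roughly as in this sketch (names hypothetical):
! /* illustrative computation of a thread's physical CPU quota */
! unsigned long thread_quota(unsigned long session_quota, /* quota of the CPU session */
!                            unsigned long thread_weight, /* weighting of this thread */
!                            unsigned long total_weight)  /* sum of all weightings    */
! {
!   return total_weight ? (session_quota * thread_weight) / total_weight : 0;
! }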
This commit also adapts the "cpu_quota" test in base-hw accordingly.
Ref #1464
This patch adds const qualifiers to the functions Allocator::consumed,
Allocator::overhead, Allocator::avail, and Range_allocator::valid_addr.
Fixes #1481
Instead of handing over object IDs to the kernel, which then has to look
them up in object pools, core can simply use object pointers to reference
kernel objects.
Ref #1443
Instead of having an ID allocator per object class, use one global
allocator for all of them. Thereby, artificial limitations for the
different object types become superfluous. Moreover, replace the base-hw
specific ID-allocator implementation with the generic Bit_allocator, which
also saves memory.
Ref #1443
The verb "bin" in the context of destroying kernel objects seems pretty
unusual in contrast to "delete". When reading "bin" in the context of
systems software an association to something like "binary" is more likely.
Ref #1443
* Instead of using local capabilities within core's context-area implementation
for stack allocation/attachment, simply do both operations when the stack gets
attached, thereby getting rid of the local capabilities in generic code
* In base-hw, the UTCB of core's main thread gets mapped directly instead of
constructing a dataspace component out of it and handing over its local
capability
* Remove the local-capability implementation from all platforms except Linux
Ref #1443
The global capability-ID counter is not used by NOVA and Fiasco.OC and
will not be needed by base-hw in the future either. Therefore, remove the
static counter variable from the generic code base and add it where
appropriate.
Ref #1443
Enable platform specific allocations and ram quota accounting for
protection domains. Needed to allocate object identity references
in the base-hw kernel when delegating capabilities via IPC.
Moreover, it can be used to account translation table entries in the
future.
Ref #1443
There are lots of places where a numeric argument of an argument string
gets extracted as a signed long value and then assigned to an unsigned long
variable. If the value in the string was negative, it would not be
detected as invalid (and replaced by the default value) but become a
positive bogus value.
With this patch, numeric values which are supposed to be unsigned get
extracted with the 'ulong_value()' function, which returns the default
value for negative numbers.
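As an illustrative fragment, the change amounts to replacing extractions of
the first kind with the second (the argument key is chosen for
illustration):
! /* before: a negative value in 'args' became a huge unsigned number */
! unsigned long quota = Arg_string::find_arg(args, "ram_quota").long_value(0);
!
! /* after: 'ulong_value' returns the default (here 0) for negative numbers */
! unsigned long quota = Arg_string::find_arg(args, "ram_quota").ulong_value(0);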
Fixes #1472
There were two bugs. First, the caller of Kernel::await_signal wasn't
re-activated for scheduling. Second, the caller did not remember that it
no longer waits on a receiver, which had bad side effects on further
signal handling.
Fix #1459
The port uses the Cortex-A9 private timer for the kernel and an EPIT as
user timer. It was successfully tested on the Wandboard Quad and the CuBox-i
with the signal test. It lacks L2-cache and TrustZone support for now.
Thanks to Praveen Srinivas (IIT Madras, India) and Nikolay Golikov (Ksys Labs
LLC, Russia). This work is partially based on their contributions.
Fix #1467
Do not mask edge-triggered interrupts to avoid losing them while masked,
see Intel 82093AA I/O Advanced Programmable Interrupt Controller
(IOAPIC) specification, section 3.4.2, "Interrupt Mask":
"When this bit is 1, the interrupt signal is masked. Edge-sensitive
interrupts signaled on a masked interrupt pin are ignored (i.e., not
delivered or held pending)"
Or to quote Linus Torvalds on the subject:
"Now, edge-triggered interrupts are a _lot_ harder to mask, because the
Intel APIC is an unbelievable piece of sh*t, and has the edge-detect
logic _before_ the mask logic, so if a edge happens _while_ the device
is masked, you'll never ever see the edge ever again (unmasking will not
cause a new edge, so you simply lost the interrupt)."
So when you "mask" an edge-triggered IRQ, you can't really mask it at
all, because if you did that, you'd lose it forever if the IRQ comes in
while you masked it. Instead, we're supposed to leave it active, and set
a flag, and IF the IRQ comes in, we just remember it, and mask it at
that point instead, and then on unmasking, we have to replay it by
sending a self-IPI." [1]
[1] - http://yarchive.net/comp/linux/edge_triggered_interrupts.html
Ref #1448
In order to match the I/O APIC configuration, a request for user timer
IRQ 0 is remapped to vector 50 (Board::TIMER_VECTOR_USER), all other
requests are transposed by adding the vector offset 48
(Board::VECTOR_REMAP_BASE).
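In code, the mapping amounts to something like the following sketch (the
function name is hypothetical, the constants are those named above):
! /* illustrative sketch of the IRQ-to-vector mapping */
! unsigned irq_to_vector(unsigned irq)
! {
!   if (irq == 0)
!     return Board::TIMER_VECTOR_USER;      /* user timer -> vector 50 */
!
!   return irq + Board::VECTOR_REMAP_BASE;  /* all others -> IRQ + 48  */
! }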
* Enable the use of the FXSAVE and FXRSTOR instructions, see Intel SDM
Vol. 3C, section 2.5.
* The state of the x87 floating point unit (FPU) is loaded and saved on
demand.
* Make the cr0 control register accessible in the Cpu class. This is in
preparation for the upcoming FPU management.
* Access to the FPU is disabled by setting the Task Switch flag in the cr0
register.
* Access to the FPU is enabled by clearing the Task Switch flag in the cr0
register.
* Implement FPU initialization
* Add is_fpu_enabled helper function
* Add pointer to CPU lazy state to CPU class
* Init FPU when finishing kernel initialization
* Add function to retry FPU instruction:
Similar to the ARM mechanism to retry undefined instructions, implement a
function for retrying an FPU instruction. If a floating-point instruction
causes an #NM exception due to the FPU being disabled, it can be retried
after the correct FPU state is restored, saving the current state and
enabling the FPU in the process.
* Disable FPU when switching to different user context:
This enables lazy save/restore of the FPU since trying to execute a
floating point instruction when the FPU is disabled will cause a #NM
exception.
* Declare constant for #NM exception
* Retry FPU instruction on #NM exception
* Assure alignment of the FXSAVE area:
The FXSAVE area is a 512-byte memory region that must be 16-byte aligned. As
it turns out, the alignment attribute is not honored in all cases, so add a
workaround that assures the alignment constraint is met by manually rounding
the start of the FXSAVE area up to the next 16-byte boundary if necessary.
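A minimal sketch of such a workaround (member and function names
hypothetical):
! /* reserve slack so the 512-byte FXSAVE area can be realigned manually */
! char _fxsave_area[512 + 16];
!
! /* round the start address up to the next 16-byte boundary if necessary */
! char *_aligned_fxsave_area()
! {
!   unsigned long const addr = (unsigned long)_fxsave_area;
!   return (char *)((addr + 15) & ~15UL);
! }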
The LAPIC timer is programmed in one-shot mode with vector 32
(Board::TIMER_VECTOR_KERNEL). The timer frequency is measured using PIT
channel 2 as reference (50ms delay).
Disable PIT timer channel 0 since BIOS programs it to fire periodically.
This avoids potential spurious timer interrupts.
The implementation initializes the Local APIC (LAPIC) of CPU 0 in xapic
mode (mmio register access) and uses the I/O APIC to remap, mask and
unmask hardware IRQs. The remapping offset of IRQs is 48.
Also initialize the legacy PIC and mask all interrupts in order to
disable it.
For more information about LAPIC and I/O APIC see Intel SDM Vol. 3A,
chapter 10 and the Intel 82093AA I/O Advanced Programmable Interrupt
Controller (IOAPIC) specification
Set bit 9 in the RFLAGS register of user CPU context to enable
interrupts on kernel- to usermode switch.
Make the local APIC accessible via its MMIO region by adding a 2 MB
large page mapping at 0xfee00000 with memory type UC.
Note: The mapping is added to the initial page tables to make the APIC
usable prior to the activation of core's page tables, e.g. in the
constructor of the timer class.
The location in memory is arbitrary but we use the same address as the
ARM architecture. Adjust references to virtual addresses in the mode
transition pages to cope with 64-bit values.
The interrupt stack must reside in the mtc region in order to use it for
non-core threads. The size of the stack is set to 56 bytes in order to
hold the interrupt stack frame (six 64-bit words, including the error code)
plus the additional vector number that is pushed onto the stack by the ISR.
Call the _virt_mtc_addr function with the _mt_isrs label to calculate
the ISR base address in Idt::setup. Again, assume the address to be
below 0x10000.
Use parameter instead of class member variable because it would get
stored into the mtc region otherwise. In a further iteration only the
actual IDT should be saved into the mtc, not the complete class
instance. Currently the class instance size is equal to the IDT table
size.
The class provides the load() function which reloads the GDTR with the
GDT address in the mtc region. This is needed to make the segments
accessible to non-core threads.
Make the _gdt_start label global to use it in the call to
_virt_mtc_addr().
Use the _mt_tss label and the placement new operator to create the
Tss class instance in the mtc region. Update the hard-coded
TSS base address to use the virtual mtc address.
On exception, the CPU first checks the IDT in order to find the
associated ISR. The IDT must therefore be placed in the mode transition
pages to make them available for non-core threads.
The limit is set to match the TSS size - 1 and the base address is
hardcoded to the *current* address of the TSS instance (0x3a1100).
TODO: Set the base address using the 'tss' label. If the TSS descriptor
format were not so utterly unusable this would be straightforward.
Changes to the code that indirectly lead to a different location
of the tss result in #GP since the base address will be invalid.
The class Genode::Tss represents a 64-bit Task State Segment (TSS) as
specified by Intel SDM Vol. 3A, section 7.7.
The setup function sets the stack pointers for privilege levels 0-2 to
the kernel stack address. The load function loads the TSS segment
selector into the task register.
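The layout behind such a class can be sketched as follows (field names
chosen for illustration, layout as given in the SDM):
! #include <stdint.h>
!
! /* 64-bit task-state segment, Intel SDM Vol. 3A, section 7.7 */
! struct Tss
! {
!   uint32_t reserved0;
!   uint64_t rsp0, rsp1, rsp2;  /* stack pointers for privilege levels 0-2 */
!   uint64_t reserved1;
!   uint64_t ist[7];            /* interrupt-stack-table pointers          */
!   uint64_t reserved2;
!   uint16_t reserved3;
!   uint16_t iomap_base;        /* offset of the I/O permission bitmap     */
! } __attribute__((packed));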
Implement user argument setter and getter support functions. The mapping of
the state registers corresponds to the system call parameter passing
convention.
The instruction pointer is the first field of the master context and can
directly be used as a jump argument, which avoids additional register
copy operations.
Point stack to client context region and save registers using push
instructions.
Note that since the push instruction first decrements the stack pointer
and then stores the value on the stack, the RSP has to point one field
past RBP before pushing the first register value.
As the kernel entry is called from the interrupt handler the stack
layout is as specified by Intel SDM Vol. 3A, figure 6-8. An additional
vector number is stored at the top of the stack.
Gather the necessary client information from the interrupt stack frame
and store it in the client context.
The new errcode field is used to store the error code that some
interrupts provide (e.g. #PF). Rework mode transition reserved space and
offset constants to match the new CPU_state layout.
The macros are used to assign syscall arguments to specific registers.
Using the AMD64 parameter passing convention avoids additional copying of
variables since the C++ function parameters are already in the right
registers.
The interrupt return instruction in IA-32e mode applies the prepared
interrupt stack frame to set the RFLAGS, CS and SS segment as well as
the RIP and RSP registers. It then continues execution of the user code.
For detailed information refer to Intel SDM Vol. 3A, section 6.14.3.
After activating the client page tables the client context cannot be
accessed any longer. The mode transition buffer however is globally
mapped and can be used to restore the remaining register values.
Set the stack pointer to the R8 field in the client context to enable
restoring registers by popping values off the stack.
After this step, the only remaining registers that do not contain client
values are RAX, RSP and RIP.
Note that the client value of RAX is popped to the global buffer region as
the register will still be used by subsequent steps. It will be restored to
the value in the buffer area just prior to resuming client-code execution.
Set I/O privilege level to 3 to allow core to perform port I/O from
userspace. Also make sure the IF flag is cleared for now until interrupt
handling is implemented.
Setup an IA-32e interrupt stack frame in the mode transition buffer region.
It will be used to perform the mode switch to userspace using the iret
instruction.
For detailed information about the IA-32e interrupt stack frame refer to
Intel SDM Vol. 3A, figure 6-8.
The constants specify offset values of CPU context member variables as
specified by Genode::Cpu_state [1] and Genode::Cpu::Context [2].
[1] - repos/base/include/x86_64/cpu/cpu_state.h
[2] - repos/base-hw/src/core/include/spec/x86/cpu.h
The new entries specify a 64-bit code segment with DPL 3 at index 3 and a
64-bit data segment with DPL 3 at index 4.
These segments are needed for transitioning to user mode.
A pointer to the client context is placed in the mt_client_context_ptr area.
It is used to pass the current client context to the lowlevel mode-switching
assembly code.
IA-32e paging translates 48-bit linear addresses to 52-bit physical
addresses. The translation structures are hierarchical and four levels
deep. The current implementation supports regular 4 KB mappings as well as
2 MB and 1 GB large-page mappings.
Memory typing is not yet implemented since the encoded type bits depend
on the active page attribute table (PAT)*.
For detailed information refer to Intel SDM Vol. 3A, section 4.5.
* The default PAT after power up does not allow the encoding of the
write-combining memory type, see Intel SDM Vol. 3A, section 11.12.4.
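For reference, each of the four levels consumes nine bits of the 48-bit
linear address, which the following illustrative helpers make explicit
(1 GB mappings resolve at the PDPT level, 2 MB mappings at the PD level):
! /* indices of a 48-bit linear address into the four paging structures */
! unsigned pml4_index(unsigned long virt) { return (virt >> 39) & 0x1ff; }
! unsigned pdpt_index(unsigned long virt) { return (virt >> 30) & 0x1ff; }
! unsigned pd_index  (unsigned long virt) { return (virt >> 21) & 0x1ff; }
! unsigned pt_index  (unsigned long virt) { return (virt >> 12) & 0x1ff; }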
* Add common IA-32e paging descriptor type:
The type represents a table entry and encompasses all fields shared by
paging structure entries of all four levels (PML4, PDPT, PD and PT).
* Simplify PT entry type by using common descriptor:
Differing fields are the physical address, the global flag and the memory
type flags.
* Simplify directory entry type by using common descriptor:
Page directory entries (PDPT and PD) have an additional 'page size' field
that specifies if the entry references a next level paging structure or
represents a large page mapping.
* Simplify PML4 entry type by using common descriptor
Top-level paging structure entries (PML4) do not have a 'pat' flag and the
memory type is specified by the 'pwt' and 'pcd' fields only.
* Implement access-right merging for directory paging entries:
The access rights for translations are determined by the U/S, R/W and XD
flags. Paging-structure entries that reference other tables must provide
the superset of the rights required for all entries of the referenced table.
Thus, merge the access rights of new mappings into existing directory
entries to grant additional rights if needed (see the sketch below).
* Add cr3 register definition:
The control register 3 is used to set the current page-directory base
register.
* Add cr3 variable to x86_64 Cpu Context
The variable designates the address of the top-level paging structure.
* Return current cr3 value as translation table base
* Set context cr3 value on translation table assignment
* Implement switch to virtual mode in kernel
Activate translation table in init_virt_kernel function by updating the
cr3 register.
* Ignore accessed and dirty flags when comparing existing table entries
These flags can be set by the MMU and must be disregarded.
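The access-right merging mentioned above can be sketched as follows (a
simplified illustration with hypothetical entry and flag names): the
writable and user-accessible bits may only be added, while the
execute-disable bit may only be cleared by a new mapping.
! /* merge the access rights of a new mapping into an existing directory entry */
! struct Directory_entry { unsigned rw : 1, us : 1, xd : 1; };
!
! void merge_rights(Directory_entry &e, bool writeable, bool user, bool exec)
! {
!   if (writeable) e.rw = 1;  /* allow writes if any referenced page needs them    */
!   if (user)      e.us = 1;  /* allow user access if any referenced page needs it */
!   if (exec)      e.xd = 0;  /* permit execution if any referenced page executes  */
! }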
* Add isr.s assembler file:
The file declares an array of Interrupt Service Routines (ISR) to handle
the exception vectors from 0 to 19, see Intel SDM Vol. 3A, section
6.3.1.
* Add Idt class:
* The class Genode::Idt represents an Interrupt Descriptor Table as
specified by Intel SDM Vol. 3A, section 6.10.
* The setup function initializes the IDT with 20 entries using the ISR
array defined in the isr.s assembly file.
* Setup and load IDT in Genode::Cpu ctor:
The Idt::setup function is only executed once on the BSP.
* Declare ISRs for interrupts 20-255
* Set IDT size to 256
This patch contains the initial code needed to build and bootstrap the
base-hw kernel on x86 64-bit platforms. It gets stuck early on because
the binary contains 64-bit instructions but is started in 32-bit mode. The
initial setup of page tables and the switch to long mode are still missing
from the crt0 code.
To ease debugging without the need to tweak the kernel every time, and to
support userland developers with useful information, this commit extends
several warnings and errors printed by the kernel/core with information
about which thread/application caused the problem and what exactly failed.
Fix #1382, Fix #1406
In the past, unmap sometimes occurred on RM clients that have no thread,
PD, or translation table assigned. However, this shouldn't be the
case anymore.
Fixes #504
* Introduce hw-specific crt0 for core that calls e.g.: init_main_thread
* re-map core's main thread UTCB to fit the right context area location
* switch core's main thread's stack to fit the right context area location
Fix #1440
This decouples the size of the mode transition control region from the
minimal mapping size of the page tables implementation. Rather, the CPU
architecture is able to specify the actual size.
Rationale: For x86_64, we need the mtc region to span two pages in order
to store all the tables required to perform the mode switch.
For the USB Armory, we use a newer version of Linux (3.18) than for the
i.MX53-QSB. The main difference is that the newer Linux uses a DTB instead
of ATAGs.
Fixes #1422
The USB Armory is almost the same as the i.MX53-QSB but it uses only
one of the two RAM banks available in the i.MX53. Furthermore, we use the
USB Armory only with TrustZone enabled.
Ref #1422
* enables world-switch using ARM virtualization extensions
* strictly separate TrustZone and virtualization-extension support from
platforms where it is not used
* extend 'Vm_session' interface to enable configuration of guest-physical memory
* introduce VM destruction syscall
* add virtual machine monitor for hw_arndale that emulates a simplified version
of ARM's Versatile Express Cortex A15 board for a Linux guest OS
Fixes #1405
To enable support of hardware virtualization for ARM on the Arndale board,
the CPU needs to be prepared to enter the non-secure mode, as long as it is
not already running in it. In particular, the interrupt controller and
some TrustZone-specific system registers need to be prepared. Moreover,
the exception vector for the hypervisor needs to be set up properly before
booting normally into the supervisor mode of the non-secure world.
Ref #1405
The generalization of interrupt objects in the kernel and the use of
C++ polymorphism instead of explicitly checking for special interrupts
within generic code (Cpu_job::_interrupt) enable the registration of
additional interrupts used by the kernel. These are needed for specific
aspects added to the kernel, such as ARM hardware-virtualization interrupts.
* Introduce generic base class for interrupt objects handled by the kernel
* Derive an interrupt class for those handled by the user-land
* Implement IPI-specific interrupt class
* Implement timer interrupts using the new generic base class
Ref #1405
Until now, one distinct software-generated IRQ per CPU was used to
send signals between CPUs. As ARM's GIC has only 16 software-generated
IRQs, and they need to be partitioned between the secure/non-secure
TrustZone worlds as well as the virtual and non-virtual worlds, we should
economize on them.
Ref #1405
* name IRQ-controller memory-mapped I/O regions consistently
in board descriptions
* move IRQ-controller and timer memory-mapped I/O region descriptions
from the CPU class to the board class
* eliminate the artificial distinction between flavors of ARM's GIC
* factor CPU-local initialization out of ARM's GIC interface description,
which is needed if the GIC is initialized differently, e.g., for TrustZone
Ref #1405
Setting the ACTLR.SMP bit even without SMP support speeds up RAM access
significantly. A proper solution would implement SMP support, which must
enable the bit anyway.
Fixes #1353
When building Genode for the VEA9X4 as a micro-hypervisor protected by the
ARM TrustZone hardware, we ran into limitations regarding our basic daily
testing routines. The most significant is that, when speaking about RAM
partitioning, the only available options are to configure the whole SRAM
as secure and the whole DDR-RAM as non-secure, or vice versa. The SRAM,
however, provides only 32 MB, which is enough neither for a representative
non-secure guest OS nor for a secure Genode that is still capable of
passing our basic tests. This prompted our decision to remove the VEA9X4
TrustZone support.
Fixes #1351
On VEA9X4-TZ, the context area overlaps with the virtual area of the
text, data and bss. However, we can't simply change the link address, as
the core image (which is used with physical addresses, i.e., 1:1 mapped)
needs to be in this particular RAM region because it is the only one that
can be protected against a VM. Thus, I've moved the context area to a place
where it shouldn't disturb any HW platform.
Fixes #1337
Declaring the SP804 0/1 module and its interrupt to be non-secure prevents
the secure Genode from receiving the interrupt; hence, the timer driver in
the secure Genode doesn't work.
Fixes #1340
This fix configures the TTBRs and translation-table descriptors as if we
were using SMP, although we aren't, to circumvent problems with UP
configurations. This fix should be superseded later by full SMP support
for the VEA9X4.
ref #1312
The HW kernel, in contrast to other kernels, provides a direct reference
to the pager object with the fault signal that is sent to the pager
activation. When accessing this reference directly, we may fall into the
time span where the root parent entrypoint of the faulter has already
dissolved the pager object from the pager entrypoint but not yet
silenced the corresponding signal context. To avoid this, we issue an
additional 'lookup_and_lock' with the received pager object. This isn't
optimal, as we don't need the potentially cost-intensive lookup but only
the synchronization.
Fixes #1311.
Fixes #1332.
On base-hw, each thread owns exactly one scheduling context for its
whole lifetime. However, introducing helping on IPC, a thread might get
executed on scheduling contexts that it doesn't own. Figuratively
speaking, the IPC-helping relation spans trees between threads. These
trees are identical to those of the IPC relation between threads. The
root of such a tree is executed on all scheduling contexts in the tree.
All other threads in the tree are not executed on any scheduling context
as long as they remain in this position. Consequently, the ready-state
of all scheduling contexts in an IPC-helping tree always equals the
state of the root context.
fix #1102
As soon as helping is used, a thread may also be in a blocking state when its
scheduling context is ready. Hence, the state designation SCHEDULED for an active
thread would be pretty misleading.
ref #1102
On the Versatile Express Cortex A9x4 platform the first memory region
0x0 - 0x4000000 is a hardware remapped memory area, containing flash
and DDR RAM copies and thus should not be added in addition to all
DDR RAM regions and the SRAM region.
In the init configuration one can configure the donation of CPU time via
'resource' tags that have the attribute 'name' set to "CPU" and the
attribute 'quantum' set to the percentage of CPU quota that init shall
donate. The pattern is the same as when donating RAM quota.
! <start name="test">
! <resource name="CPU" quantum="75"/>
! </start>
This would cause init to try donating 75% of its CPU quota to the child
"test". Init and core do not preserve CPU quota for their own
requirements by default as it is done with RAM quota.
The CPU quota that a process owns can be applied through the thread
constructor. The constructor has been enhanced by an argument that
indicates the percentage of the program's CPU quota that shall be granted
to the new thread. So 'Thread(33, "test")' would cause the backing CPU
session to try to grant 33% of the program's CPU quota to the thread
"test". For now, the CPU quota of a thread can't be altered after
construction. Constructing a thread with CPU quota 0 doesn't mean the
thread never gets scheduled but that the thread has no guarantee to receive
CPU time. Such threads have to live with excess CPU time.
Threads that already existed in the official repositories of Genode were
adapted so that they receive a quota of 0.
This commit also provides a run test 'cpu_quota' in base-hw (the only
kernel that currently applies the CPU-quota scheme). The test basically
runs three threads with different physical CPU quota. The threads simply
count for 30 seconds each and the test then checks whether the counter
values relate to the CPU-quota distribution.
fix #1275
On Arndale, the kernel timer resets to the initial value of the last
count-down and continues as soon as it reaches zero. We must check this
via the interrupt status when we read out the timer value and, in that
case, return 0 instead of the real value.
fix #1299
Kernel::Processor was a confusing remnant from the old scheme where we had a
Processor_driver (now Genode::Cpu) and a Processor (now Kernel::Cpu).
This commit also updates the in-code documentation and the variable and
function naming accordingly.
fix #1274
The run test 'hw_info' prints the content of the basic ARMv7 identification and
feature registers in a pretty readable format. It is a kernel-internal test
because many of these registers are restricted to privilege level 1 or higher.
fix #1278
The new scheduler serves the orthogonal requirements of both
high-throughput-oriented scheduling contexts (shortly called fill in the
scheduler) and low-latency-oriented scheduling contexts (shortly called
claim in the scheduler). Thus it knows two scheduling modes. Every claim
owns a CPU-time-quota expressed as percentage of a super period
(currently 1 second) and a priority that is absolute as long as the
claim has quota left for the current super period. At the end of a super
period the quota of all claims gets refreshed. During a super period,
the claim mode is dominant as long as any active claim has quota left.
Every time this isn't the case, the scheduler switches to scheduling of
fills. Fills are scheduled in a simple round robin with identical time
slices. Order and time-slices of the fill scheduling are not affected by
the super period. Now, on thread creation, two arguments are needed:
priority and quota. If the quota is 0, the new thread participates in CPU
scheduling with a fill only. Otherwise, it participates with both a
claim and a fill. This concept dovetails nicely with Genode's quota-based
resource management, as any process can grant subsets of its own
CPU-time and priorities to its children without knowing the global means of
CPU-time and priority.
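The resulting selection logic can be summarized by the following sketch
(simplified, with hypothetical helper functions):
! struct Context;
! Context *highest_prio_claim_with_quota();  /* hypothetical helpers */
! Context *next_fill_round_robin();
!
! /* simplified selection logic of the two-mode scheduler */
! Context *schedule()
! {
!   /* claim mode is dominant as long as an active claim has quota left */
!   if (Context *claim = highest_prio_claim_with_quota())
!     return claim;
!
!   /* otherwise schedule fills round-robin with identical time slices */
!   return next_fill_round_robin();
! }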
The commit also adds a run script that enables an automated unit test of the
scheduler implementation.
fix #1225
To serve the needs of the coming CPU scheduler, the double list needs
additional methods such as 'to_tail' and 'insert_head'.
The commit also adds a run script that enables an automated unit test
of the list implementation.
ref #1225
Kernel tests are done by replacing the implementation of an otherwise
empty function 'Kernel::test' that gets called once at the primary CPU
as soon as all kernel initialization is done. To achieve this, the test
binary that implements 'Kernel::test' must be linked against the core
lib and must then replace the core binary when composing the boot image.
The latter can be done conveniently in a run script by setting the new
argument 'core_type' of the function 'build_boot_image' to the value
'test'. If no kernel test is needed, the argument does not have to be
given - it is set to 'core' by default, which results in a "normal"
Genode image.
ref #1225
Previously, Idle_thread inherited from Thread, which caused an extra
processor_pool.h and processor_pool.cc and also made the class models for
processor and scheduling more complex. However, this inheritance doesn't
make much sense anyway, as an idle context doesn't trigger most of the code
in Thread.
ref #1225
The memory barrier prevents the compiler from changing the program order
of memory accesses in such a way that accesses to the guarded resource
end up outside the guarded stage. As cmpxchg() defines the start of the
guarded stage, it also represents an effective memory barrier.
On x86, the architecture ensures that writes are not reordered with older
reads, that writes to memory are not reordered with other writes (except in
cases that are not relevant for our locks), and that read/write
instructions are not reordered with I/O instructions, locked instructions,
and serializing instructions.
However, on ARM, the architectural memory model allows not only that
memory accesses take local effect in an order different from their program
order but also that different observers (components that can access
memory, like data busses, TLBs and branch predictors) observe these
effects each in a different order. Thus, a correct program order isn't
sufficient for a correct observation order. The memory barrier must
additionally be enforced at the architectural level to achieve this.
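As a sketch, a lock built on such a compare-and-exchange therefore needs
both the compiler and the architectural barrier; expressed with GCC
builtins (not the actual Genode primitives), this looks roughly as follows:
! /* illustrative spinlock: the acquire/release orderings keep guarded
!    accesses inside the critical section at the compiler level and make
!    the compiler emit the architectural barriers (e.g., 'dmb' on ARM) */
! void lock(volatile int *l)
! {
!   int expected = 0;
!   while (!__atomic_compare_exchange_n(l, &expected, 1, false,
!                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
!     expected = 0;  /* reset and retry until the lock word was 0 */
! }
!
! void unlock(volatile int *l)
! {
!   __atomic_store_n(l, 0, __ATOMIC_RELEASE);
! }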
Fixes #692
Invalidating all branch predictors before switching the PD
fixes instability problems on the Panda board and has little effect
on the performance of other boards. However, we neither know why
this is a fix nor whether it fixes the real cause of the problem.
fix #1294
Previously, the timer was used to remember the state of the time slices.
This was sufficient before priorities entered the scene, as a thread always
received a fresh time slice when it was scheduled away. However, with
priorities this isn't always the case. A thread can be preempted by another
thread due to a higher priority. In this case, the low-priority thread must
remember how much time it has consumed of its current time slice because
the timer gets re-programmed. Otherwise, if we have high-priority threads
that block and unblock with high frequency, the head of the next lower
priority would start with a fresh time slice every time and would never be
superseded.
fix #1287
* When flushing the data and unified cache on ARM, clean and invalidate
instead of just cleaning the corresponding cache lines
* After zero-ing a freshly constructed dataspace in core, invalidate
corresponding cache lines from the instruction cache
After modifying the mode transition for branch prediction, tz_vmm wasn't
working anymore on hw_imx53_tz, although the modifications had nothing to do
with the VM code. However, the number of instructions in the MT before the
VM exception vector changed. So I tried stuffing the last working version
with NOPs and found that tz_vmm worked for some NOP counts but not for
others. Thus, I increased the alignment of the VM exception vector from 16
bytes to 32 bytes, et voilà, it works with any number of NOPs as well as
with the branch-prediction commits.
ref #474
Previously, we did the protection-domain switches without a transitional
translation table that contains only global mappings. This was fine as long
as the CPU did no speculative memory accesses. However, enabling branch
prediction triggers such accesses. Thus, if we don't want to invalidate
the predictors on every context switch, we need to switch more carefully.
ref #474
When a page fault cannot be resolved, the GDB monitor can get a hint about
which thread faulted by evaluating the thread state object returned by
'Cpu_session::state()'. Unfortunately, with the current implementation,
the signal which informs GDB monitor about the page fault is sent before
the thread state object of the faulted thread has been updated, so it
can happen that the faulted thread cannot be determined immediately
after receiving the signal.
With this commit, the thread state gets updated before the signal is sent.
At least on base-nova it can also happen that the thread state is not
accessible yet after receiving the page fault notification. For this
reason, GDB monitor needs to retry its query until the state is
accessible.
Fixes #1206.
The build config for core is now provided through libraries to enable
implicit config composition through specifiers and thereby avoid
consideration of inappropriate targets.
fix #1199
A subject that inherits from Processor_client does not necessarily need to
do a processor-global TLB flush (e.g., VMs). On the other hand, the Thread
class (as representation of the only source of TLB flushes) is already one
of the largest classes in base-hw because it provides all the syscall
backends, and it should therefore not accumulate other aspects without a
functional reason.
Hence, I decided to move the aspect of synchronizing a TLB flush over all
processors to a dedicated class named Processor_domain_update.
Additionally, a singleton Processor_domain_update_list is used to enable
each processor to see all domain-update requests that are currently
pending.
fix #1174
Commit 6a3368ee, which refactored the mode-transition assembler path and
the high-level entry point, fundamentally broke that part for the TrustZone
VMs. Instead of jumping to the appropriate address, the instruction value
at that location was used as the target address.
Moreover, the TrustZone part of the mode-transition page was not included
in the boundary check.
Ref #1182
On ARM, it is relevant to distinguish not only between ordinary cached
memory and write-combined memory, but also non-cached memory. To insert the
appropriate page-table entries, e.g., in the base-hw kernel, we need to
preserve the information about the kind of memory from allocation until the
pager resolves a page fault. Therefore, this commit introduces a new
Cache_attribute type and replaces the write_combined boolean with the new
type where necessary.
Don't define assembler constants inside macros; thereby, calling the
corresponding macros isn't needed anymore. To prevent having too many
constants included in files where they aren't needed, split the macros.s
file into a generic mode_transition.s part and a globally used macros.s.
Fix #1180
Previously, this was not done before Thread_base::start(..) in base-hw,
as a valid cap was not needed that early. However, when changing the
affinity of a thread, we need the cap to be valid before
Thread_base::start(..).
fix #1151
Until now, the scheduling timer was only refreshed with a new scheduling
timeout when the chosen scheduling context had changed. But we want it to be
refreshed also when the scheduled context yields without affecting the
scheduler's choice (this is the case, e.g., when the idle thread gets a
scheduling timeout or a thread yields without any competitor in its
priority band).
ref #1151
Fixes an alignment problem introduced by commit "hw: map core on demand"
where physical address alignment wasn't checked anymore, when inserting
a section within the first-level table of ARM's short translation table
format.
Many thanks to Christian Prochaska for helping to debug the problem.
On ARM, when machine instructions get written into the data cache
(for example by a JIT compiler), one needs to make sure that the
instructions get written out to memory and read from memory into
the instruction cache before they get executed. This functionality
is usually provided by a kernel syscall and this patch adds a generic
interface for Genode applications to use it.
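Assuming the interface boils down to a function along the lines of
'cache_coherent(addr, size)' (the exact name and header are assumptions
here), a JIT compiler would use it roughly like this:
! /* assumed interface of the new cache-maintenance call */
! void cache_coherent(unsigned long addr, unsigned long size);
!
! /* sketch: make freshly generated code visible to the instruction cache */
! void finalize_jit_code(void *code, unsigned long size)
! {
!   /* write the data cache back and invalidate the instruction cache for
!      the given range before jumping into the generated code */
!   cache_coherent((unsigned long)code, size);
! }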
Fixes #1153.
This patch changes the top-level directory layout as a preparatory
step for improving the tools for managing 3rd-party source codes.
The rationale is described in the issue referenced below.
Issue #1082