Until now, the print procedure call in the Log_session_component
contained a hardcoded 'init' string. Adding a method that filters the
session args to prefix the label, and labelling the initial
Cpu_connection for the init process, enabled us to remove the
hardcoded string.
Fixes #789.
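As a rough illustration of the idea, a label-prefixing helper could look
like the sketch below. The helper name and the args format are assumptions
for this example, not the actual Genode implementation:

  #include <iostream>
  #include <string>

  /* Hypothetical helper: prefix the parent's name to the 'label' argument
   * of a session-args string, so label="test" becomes label="init -> test". */
  std::string prefix_label(std::string args, std::string const &parent)
  {
      std::string const key = "label=\"";
      std::size_t const pos = args.find(key);

      /* no label present: append one that consists of the parent name only */
      if (pos == std::string::npos)
          return args + ", label=\"" + parent + "\"";

      /* insert the parent name right after the opening quote */
      return args.insert(pos + key.size(), parent + " -> ");
  }

  int main()
  {
      /* the LOG output is now labeled with the actual client,
       * no hardcoded 'init' string needed */
      std::cout << prefix_label("ram_quota=8K, label=\"test-printf\"", "init")
                << '\n';   /* ram_quota=8K, label="init -> test-printf" */
  }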
The given number of bytes is consumed but not actually allocated. This
feature may be used to account for memory used within core that is, in
fact, provided by a session client.
Fixes #792.
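A minimal sketch of such accounting, using a hypothetical Quota_guard
class rather than core's actual interface:

  #include <cstddef>
  #include <stdexcept>

  /* Hypothetical guard: 'consume' charges bytes against the session quota
   * without performing an allocation, e.g., for memory used within core
   * that is actually provided by the session client. */
  class Quota_guard
  {
      std::size_t _quota;        /* bytes donated by the session client */
      std::size_t _consumed = 0; /* bytes charged so far                */

      public:

          explicit Quota_guard(std::size_t quota) : _quota(quota) { }

          /* account for 'size' bytes without allocating them */
          void consume(std::size_t size)
          {
              if (_consumed + size > _quota)
                  throw std::runtime_error("quota exceeded");
              _consumed += size;
          }

          std::size_t consumed() const { return _consumed; }
  };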
* Always download the md5sum file and check the downloaded binaries
* Extend the sleep time in the l4linux_netperf script, so that DHCP has
  enough time to finish
* Fix the regexp rule in the l4linux_netperf run script to also work with
  MAC addresses containing only numbers (see the sketch below)
Fix #787
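The run script itself is written in Tcl; the std::regex sketch below only
illustrates the nature of the fix, namely that the pattern must also accept
MAC addresses whose bytes are all digits. The pattern strings are assumed
for this example and are not the original expressions:

  #include <cassert>
  #include <regex>
  #include <string>

  int main()
  {
      /* too strict: expects at least one letter per byte and therefore
       * misses MAC addresses that contain only numbers */
      std::regex broken("([0-9]*[a-f]+[0-9a-f]*:){5}[0-9]*[a-f]+[0-9a-f]*");

      /* accepts any two hex digits per byte, letters or not */
      std::regex fixed("([0-9a-f]{2}:){5}[0-9a-f]{2}");

      std::string const mac = "02:01:02:03:04:05";   /* digits only */

      assert(!std::regex_match(mac, broken));
      assert( std::regex_match(mac, fixed));
  }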
Sometimes the ports are not freed quickly enough by the host system after
the first test has finished. The port restriction is mainly required for
qemu, so don't use it for bare-metal hardware tests.
* Add platform_drv for usb_drv, sd_card_drv and ahci
* Add ahci driver and part_blk on top of it
* Add linux instance using a SATA partition
* Add VIM using a SATA partition
The window scale option (http://tools.ietf.org/html/rfc1323) patch of lwIP
definitely works solely for the receive window, not for the send window.
Setting the send window size to the maximum 16-bit value, 65535, or to
values of the form x * 65536 - 1 results in the same performance.
Everything else decreases performance.
We will have to check this window scale patch before using higher values.
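Presumably these sizes map to the TCP window options in lwipopts.h. The
fragment below only illustrates the values that performed well; it is not
the configuration actually used by Genode:

  /* illustrative lwipopts.h fragment (values are examples only) */
  #define TCP_WND      (128 * 1024)   /* receive window, covered by the
                                         RFC 1323 window-scale patch      */
  #define TCP_SND_BUF  65535          /* send side: 16-bit maximum; only
                                         x * 65536 - 1 (131071, 196607, ...)
                                         performed equally well           */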
Minor speed improvement of ~6 Mbit/s. Additionally, an Ethernet frame now
fits into one memory allocation per pbuf. Beforehand, two were allocated -
one of 1514 bytes and another of 2 bytes (observed by instrumenting the
copy loop in libports/src/lib/lwip/platform/nic.cc).
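In lwIP terms, the intended behaviour roughly corresponds to the sketch
below, which copies a received frame into a single contiguous PBUF_RAM
pbuf (one allocation) instead of a chain of two:

  #include <lwip/pbuf.h>
  #include <string.h>

  /* Copy a received Ethernet frame into one contiguous pbuf instead of
   * chaining a 1514-byte and a 2-byte allocation. */
  static struct pbuf *frame_to_pbuf(void const *frame, u16_t len)
  {
      struct pbuf *p = pbuf_alloc(PBUF_RAW, len, PBUF_RAM);

      /* PBUF_RAM pbufs are contiguous, so a single copy suffices */
      if (p)
          memcpy(p->payload, frame, len);

      return p;
  }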
lwIP reports via getsockopt the default size of the receive buffer to the
netperf server. lwIP returns 2 GB, and the netperf server uses this value
to allocate some buffers - which, of course, fails with out of memory.
This commit reduces the "default size" to a smaller value.
With the commit we are no longer forced to (but still can) use specific
netperf client options regarding memory allocation of the receive buffer.
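For illustration, querying that value goes through the usual BSD socket
call shown below (a sketch, not netperf's actual code); a stack that
answers with 2 GB here tricks the caller into a huge allocation:

  #include <stdio.h>
  #include <sys/socket.h>

  /* Ask the stack for the default receive-buffer size of 'sock'. */
  static int query_rcvbuf(int sock)
  {
      int       size = 0;
      socklen_t len  = sizeof(size);

      if (getsockopt(sock, SOL_SOCKET, SO_RCVBUF, &size, &len) == 0)
          printf("default receive buffer: %d bytes\n", size);

      return size;
  }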
MAERTS is STREAM backwards and effectively lets the netserver send the
packets to the netperf client. So, TCP_STREAM measures the receive
performance of the lwIP stack on Genode, and TCP_MAERTS the send
performance of the lwIP stack on Genode.
Normally, this bench read all data into one large buffer and then wrote
it back to the drive. For SATA 3 (6 Gbps) benchmarks, however, we would
need a buffer of approximately 1.2 GB to do it this way and still reach
2 seconds of bench time. Thus, we use a buffer of the SATA request size
and overwrite it with every request.
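The 1.2 GB figure follows from roughly 600 MB/s of SATA 3 payload
bandwidth times 2 seconds. Below is a sketch of the reuse scheme with
hypothetical accessors (read_request/write_request are stand-ins, not the
driver API, and the request size is assumed):

  #include <cstddef>
  #include <vector>

  std::size_t const request_size = 128 * 1024;           /* assumed request size */
  std::size_t const total_bytes  = 1200UL * 1024 * 1024; /* ~600 MB/s over ~2 s  */

  /* hypothetical block accessors, stubbed out for this sketch */
  static void read_request (std::size_t, void *, std::size_t)       { }
  static void write_request(std::size_t, void const *, std::size_t) { }

  static void benchmark()
  {
      std::vector<char> buf(request_size);   /* one request-sized buffer, reused */

      /* read phase: every request overwrites the same buffer */
      for (std::size_t off = 0; off < total_bytes; off += request_size)
          read_request(off, buf.data(), buf.size());

      /* write phase: write the buffer back, request by request */
      for (std::size_t off = 0; off < total_bytes; off += request_size)
          write_request(off, buf.data(), buf.size());
  }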
This commit splits the Fiasco.OC-specific extension for the cli_monitor
into one for the Arndale platform and one for all others. On Arndale, we
add the cpu_frequency command besides the ones defined on all platforms.
Initialize and limit the port speed to 3 Gbps in general because the
Seagate Barracuda 1TB currently throws many errors at 6 Gbps.
Try all port speeds from the highest to the lowest as long as debouncing
fails, and try them all again in this order when falling back to slower
debouncing.
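A sketch of that retry strategy, with hypothetical names (set_port_speed
and debounce are stand-ins for the driver's routines, not the actual code):

  enum Speed    { SPEED_6GBPS, SPEED_3GBPS, SPEED_1_5GBPS, SPEED_COUNT };
  enum Debounce { DEBOUNCE_FAST, DEBOUNCE_SLOW, DEBOUNCE_MODES };

  /* hypothetical hardware accessors, stubbed out for this sketch */
  static void set_port_speed(Speed)  { }
  static bool debounce(Debounce)     { return false; }   /* true on success */

  /* Step from the highest to the lowest port speed while debouncing fails,
   * and run through all speeds again with the slower debouncing mode. */
  static bool establish_link()
  {
      for (int mode = DEBOUNCE_FAST; mode < DEBOUNCE_MODES; mode++)
          for (int speed = SPEED_6GBPS; speed < SPEED_COUNT; speed++) {
              set_port_speed(static_cast<Speed>(speed));
              if (debounce(static_cast<Debounce>(mode)))
                  return true;
          }
      return false;   /* no speed/debounce combination succeeded */
  }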
Try to recover from all types of interface errors.
When a port has recovered from an error during an NCQ command, get the
last LBA that was accessed successfully and continue the command from
this point.
Use a platform driver through the 'Regulator' service to do CMU and PMU config.
Switch off verbosity by default.
To raise the expressiveness of the benchmark, it dynamically adjusts the
transfer amount of each test so that the result is measured over a
transfer time of at least 2000 ms and at most 2300 ms.
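A sketch of such a calibration loop, with an assumed run_transfer helper
that returns the measured time in milliseconds (not the benchmark's actual
code):

  #include <cstddef>
  #include <cstdint>

  /* hypothetical helper, stubbed out for this sketch: transfers 'bytes'
   * of data and returns the elapsed time in milliseconds */
  static std::uint64_t run_transfer(std::size_t bytes) { return bytes / 1000; }

  /* Grow or shrink the transfer amount until the measured duration lies
   * between 2000 ms and 2300 ms, so every result rests on a comparable
   * measurement period (assumes a non-zero start amount). */
  static std::size_t calibrate(std::size_t amount)
  {
      for (;;) {
          std::uint64_t const ms = run_transfer(amount);

          if (ms >= 2000 && ms <= 2300)
              return amount;                 /* inside the target window */

          /* rescale towards the middle of the target window (~2150 ms) */
          amount = std::size_t(amount * 2150ULL / (ms ? ms : 1));
      }
  }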
* Retry debouncing first with a higher trial time and, if this also doesn't
  work, additionally with a lower link speed.
* Ignore the DevSlp feature because it isn't needed anyway, as far as I can
  see.
* Relax some restrictions regarding the feedback of the drive, as far as
  they seem to have no effect in Linux either.
Fix #753