genode/repos/base-nova/include/cpu_session/client.h

/*
 * \brief  Client-side CPU session NOVA extension
 * \author Alexander Boettcher
 * \date   2012-07-27
 */

/*
 * Copyright (C) 2012-2013 Genode Labs GmbH
 *
 * This file is part of the Genode OS framework, which is distributed
 * under the terms of the GNU General Public License version 2.
 */
#ifndef _INCLUDE__CPU_SESSION__CLIENT_H_
#define _INCLUDE__CPU_SESSION__CLIENT_H_

#include <base/rpc_client.h>
#include <base/thread.h>
#include <cpu_session/capability.h>
#include <nova_cpu_session/nova_cpu_session.h>

namespace Genode {
    struct Cpu_session_client : Rpc_client<Nova_cpu_session>
    {
        explicit Cpu_session_client(Cpu_session_capability session)
        : Rpc_client<Nova_cpu_session>(static_cap_cast<Nova_cpu_session>(session)) { }
        Thread_capability create_thread(size_t weight, Name const &name,
                                        addr_t utcb = 0) {
            return call<Rpc_create_thread>(weight, name, utcb); }

        Ram_dataspace_capability utcb(Thread_capability thread) {
            return call<Rpc_utcb>(thread); }

        void kill_thread(Thread_capability thread) {
            call<Rpc_kill_thread>(thread); }

        int set_pager(Thread_capability thread, Pager_capability pager) {
            return call<Rpc_set_pager>(thread, pager); }

        int start(Thread_capability thread, addr_t ip, addr_t sp) {
            return call<Rpc_start>(thread, ip, sp); }
        /*
         * Pause the thread synchronously
         *
         * The NOVA kernel provides a "recall" feature that forces a thread
         * into an exception, in which the thread's state can be obtained
         * and its execution halted. The exception, however, is delivered
         * only the next time the thread would leave the kernel, i.e.,
         * asynchronously, so someone has to wait until it has triggered.
         *
         * Waiting inside core's CPU service is prone to deadlock because
         * one core thread serves many session interfaces: if the caller
         * reaches core but is preempted before issuing the actual recall,
         * the to-be-paused thread may meanwhile invoke another core
         * service (e.g., LOG output). The kernel then donates that
         * thread's timeslice to help finish the caller's request, core
         * blocks waiting for the recall exception, and the to-be-paused
         * thread never leaves the kernel because it still waits for core.
         *
         * Hence, on NOVA the blocking must take place in the context of
         * the caller: 'pause_sync' returns a semaphore capability, the
         * caller blocks on that semaphore, and the pager of the paused
         * thread wakes it up once the recall exception with the thread
         * state has arrived.
         */
        void pause(Thread_capability thread)
        {
            /* obtain semaphore to block on until the pause took effect */
            Native_capability block = call<Rpc_pause_sync>(thread);
            if (!block.valid())
                return;

            /* block in the caller's context, woken up by the target's pager */
            Nova::sm_ctrl(block.local_name(), Nova::SEMAPHORE_DOWN);
        }
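        /*
         * A minimal usage sketch ('cpu_cap' and 'thread_cap' are
         * hypothetical placeholders for valid capabilities). Because
         * 'pause' returns only after the recall exception arrived, the
         * state read afterwards is a consistent snapshot:
         *
         *   Cpu_session_client cpu(cpu_cap);
         *
         *   cpu.pause(thread_cap);                   // blocks until thread is paused
         *   Thread_state s = cpu.state(thread_cap);  // obtain consistent state
         *   cpu.resume(thread_cap);                  // let the thread continue
         */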
        void resume(Thread_capability thread) {
            call<Rpc_resume>(thread); }

        void cancel_blocking(Thread_capability thread) {
            call<Rpc_cancel_blocking>(thread); }

        Thread_state state(Thread_capability thread) {
            return call<Rpc_get_state>(thread); }

        void state(Thread_capability thread, Thread_state const &state) {
            call<Rpc_set_state>(thread, state); }
        void exception_handler(Thread_capability thread,
                               Signal_context_capability handler) {
            call<Rpc_exception_handler>(thread, handler); }

        void single_step(Thread_capability thread, bool enable)
        {
            /* same handshake as in 'pause': wait until the change took effect */
            Native_capability block = call<Rpc_single_step_sync>(thread, enable);
            if (!block.valid())
                return;

            Nova::sm_ctrl(block.local_name(), Nova::SEMAPHORE_DOWN);
        }
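        /*
         * Sketch of a debugger-style sequence (same hypothetical 'cpu' and
         * 'thread_cap' as above): like 'pause', 'single_step' blocks on the
         * returned semaphore until the mode switch took effect before the
         * thread is resumed:
         *
         *   cpu.pause(thread_cap);
         *   cpu.single_step(thread_cap, true);  // returns once the mode is active
         *   cpu.resume(thread_cap);             // continue in single-step mode
         */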
        Affinity::Space affinity_space() const {
            return call<Rpc_affinity_space>(); }

        void affinity(Thread_capability thread, Affinity::Location location) {
            call<Rpc_affinity>(thread, location); }

        Dataspace_capability trace_control() {
            return call<Rpc_trace_control>(); }

        unsigned trace_control_index(Thread_capability thread) {
            return call<Rpc_trace_control_index>(thread); }

        Dataspace_capability trace_buffer(Thread_capability thread) {
            return call<Rpc_trace_buffer>(thread); }

        Dataspace_capability trace_policy(Thread_capability thread) {
            return call<Rpc_trace_policy>(thread); }
        /*
         * CPU-quota accounting
         *
         * In the init configuration, CPU time is donated via 'resource'
         * tags with the name "CPU", following the same pattern as RAM
         * quota:
         *
         *   <start name="test">
         *     <resource name="CPU" quantum="75"/>
         *   </start>
         *
         * makes init try to donate 75% of its CPU quota to the child
         * "test". Neither init nor core preserves CPU quota for its own
         * requirements by default. A thread's share of its program's CPU
         * quota is passed as a percentage to the thread constructor,
         * e.g., 'Thread(33, "test")', and cannot be altered after
         * construction. A quota of 0 does not mean the thread is never
         * scheduled, only that it has no guaranteed CPU time and must
         * live off excess time.
         */
        int ref_account(Cpu_session_capability session) {
            return call<Rpc_ref_account>(session); }

        int transfer_quota(Cpu_session_capability session, size_t amount) {
            return call<Rpc_transfer_quota>(session, amount); }

        Quota quota() override { return call<Rpc_quota>(); }
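        /*
         * A minimal sketch of a quota transfer between two sessions. The
         * capabilities 'parent_cap' and 'child_cap' and the value 'amount'
         * are hypothetical placeholders; the sketch only illustrates the
         * call order, not core's exact accounting protocol:
         *
         *   Cpu_session_client parent(parent_cap);
         *   Cpu_session_client child(child_cap);
         *
         *   child.ref_account(parent_cap);            // parent becomes reference account
         *   parent.transfer_quota(child_cap, amount); // donate 'amount' to the child
         */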
        Capability<Native_cpu> native_cpu() override { return call<Rpc_native_cpu>(); }
        private:

            /*
             * Dummy implementations of the synchronous RPC interface,
             * hidden here so that clients use the blocking 'pause' and
             * 'single_step' wrappers above instead
             */
            Native_capability pause_sync(Thread_capability) {
                return Native_capability(); }

            Native_capability single_step_sync(Thread_capability, bool) {
                return Native_capability(); }
    };
}
#endif /* _INCLUDE__CPU_SESSION__CLIENT_H_ */