CordaRPCClientImpl(session: <ERROR CLASS>, sessionLock: ReentrantLock, username: String)
Core RPC engine implementation. To learn how to use RPC, see CordaRPCClient instead.
The way RPCs are handled is fairly standard except for the handling of observables. When an RPC might return an Observable it is specially tagged. This causes the client to create a new transient queue, with a random ID in its name, for receiving observables and their observations. This ID is sent to the server in a message header, and all observations are sent back via this single queue.
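The single-queue scheme above can be sketched roughly as follows. This is an illustrative simulation, not Corda's real API: the header name, the Observation record, and the queue naming convention are all assumptions, and a plain BlockingQueue stands in for the Artemis transient queue.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: one transient queue per RPC, named with a random ID,
// multiplexing the observations of every observable that RPC returns.
public class SingleQueueDemo {
    // Hypothetical header key under which the queue ID would travel to the server.
    static final String QUEUE_ID_HEADER = "rpc-observable-queue-id";

    // Each observation carries the ID of the server-side observable it belongs
    // to, so many observables can share the one queue.
    record Observation(int observableId, String payload) {}

    // Client side: a random ID baked into the transient queue's name.
    static String newQueueName() {
        return "rpc.observations." + UUID.randomUUID();
    }

    // Client side: drain the shared queue, demultiplexing by observable ID.
    static List<String> drain(BlockingQueue<Observation> queue) {
        List<String> out = new ArrayList<>();
        Observation obs;
        while ((obs = queue.poll()) != null) {
            out.add(obs.observableId() + ":" + obs.payload());
        }
        return out;
    }

    public static void main(String[] args) {
        BlockingQueue<Observation> queue = new ArrayBlockingQueue<>(16);
        // Server side (simulated): two different observables publish into the same queue.
        queue.offer(new Observation(1, "first"));
        queue.offer(new Observation(2, "second"));
        System.out.println(drain(queue)); // [1:first, 2:second]
    }
}
```

The point to notice is that nothing about the queue itself distinguishes the observables; only the per-observation ID does.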
The reason for doing it this way, rather than the more obvious one-queue-per-observable approach, is that we want the queues to be transient, i.e. deleted automatically by the broker when the client's session ends.
Creating a transient queue requires a round trip to the broker, so an RPC that could return observables takes two server round trips instead of one. That's why we require such RPCs to be marked with RPCReturnsObservables, rather than applying this special treatment to every call.
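The marker-annotation check can be sketched like this. The annotation and the RPC interface below are illustrative stand-ins, not Corda's actual declarations; only methods carrying the marker would pay the extra broker round trip.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Sketch: only annotated RPC methods trigger transient-queue creation.
public class MarkerCheck {
    // Stand-in for Corda's RPCReturnsObservables marker annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @interface RPCReturnsObservables {}

    // Hypothetical RPC interface with one streaming and one plain method.
    interface NodeRpcOps {
        @RPCReturnsObservables
        Object trackTransactions();   // may stream results back via observables

        Object currentNodeTime();     // plain request/response, one round trip
    }

    // The client inspects the marker before deciding to create a queue.
    static boolean needsObservableQueue(Method m) {
        return m.isAnnotationPresent(RPCReturnsObservables.class);
    }

    public static void main(String[] args) throws NoSuchMethodException {
        Method streaming = NodeRpcOps.class.getMethod("trackTransactions");
        Method plain = NodeRpcOps.class.getMethod("currentNodeTime");
        System.out.println(needsObservableQueue(streaming)); // true
        System.out.println(needsObservableQueue(plain));     // false
    }
}
```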
If the Artemis/JMS APIs allowed us to create transient queues assigned to someone else, we could potentially use a different design in which the node creates new transient queues (one per observable) on the fly. The client would then have to watch for this and start consuming those queues as they were created.
We use one queue per RPC because we don't know ahead of time how many observables the server might return, and often the server doesn't know either, which pushes towards a single-queue design. At the same time, the processing of observations returned by an RPC might be striped across multiple threads, and we'd like backpressure management to be scoped with more granularity than the whole client process. So we end up with a compromise: the unit of backpressure management is the response to a single RPC.
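That compromise can be illustrated with bounded per-RPC buffers. This is a simplified sketch, not Corda's implementation: a bounded BlockingQueue per RPC response stands in for the broker-level flow control, and the capacity is arbitrary.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of per-RPC backpressure: each RPC response gets its own bounded
// buffer, so a slow consumer of one RPC's observations slows only that RPC,
// not the whole client process.
public class PerRpcBackpressure {
    public static void main(String[] args) {
        BlockingQueue<String> rpc1Buffer = new ArrayBlockingQueue<>(2);
        BlockingQueue<String> rpc2Buffer = new ArrayBlockingQueue<>(2);

        // Fill RPC 1's buffer; a further offer is refused, which is where a
        // real producer would block or slow down...
        rpc1Buffer.offer("a");
        rpc1Buffer.offer("b");
        boolean accepted = rpc1Buffer.offer("c");   // false: RPC 1 is saturated

        // ...but RPC 2's buffer is unaffected.
        boolean other = rpc2Buffer.offer("x");      // true

        System.out.println(accepted + " " + other); // false true
    }
}
```

Scoping the bound per response is the granularity compromise the paragraph above describes: finer than per-process, coarser than per-observable.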
TODO: Backpressure isn't propagated all the way through the MQ broker at the moment.