= Message Service =

Messaging constitutes the fundamental transport of messages, plus the infrastructure and classes designed to make certain messaging paradigms simple. There are 3 underlying transports in the SAFplus7 messaging layer:

 1. IOC messaging C API
 . This is a thin layer on top of IOC messaging itself and can have multiple transports; TIPC and UDP are currently supported. Applications can create their own message server listening on any IOC port.
 1. SAFplus messaging
 . All SAFplus components and user applications need an IOC message server to carry all SAFplus library communications. The IOC port involved is well-known for SAFplus services and is either well-known or dynamically assigned for user applications. Since this single port handles multiple protocols (each SAFplus library speaks its own protocol), messages are contained within a larger protocol that identifies the contained protocol. It is possible for user applications to register their own sub-protocol and receive notifications when messages arrive. In this manner, applications can take advantage of much of the SAFplus message infrastructure. (In SAFplus 6.1, this capability is called the "EO" -- in SAFplus7, the 6.1 "EO" will not be used.)
 1. SA-Forum Message Queues
 . See the Service Availability Forum documentation.

== Synchronous and Asynchronous ==

Message receipt can occur either synchronously or asynchronously using the "Wakeable" feature of the SAFplus thread semaphore system.

== Use Cases ==

=== Server Side ===

 1. Simple message server
 . Call a Wakeable.wake() whenever a message is received. This Wakeable can be a callback, a message queue, or any derived class. wake() returns ACCEPTED if the message has been fully processed; this implicitly hands the message buffer back to the message server. Otherwise, wake() returns DEFERRED and retains ownership of the message buffer, and if the message was delivered reliably the message is NOT yet marked as delivered. At some later point the application will call msgServer.accept(msg*). This gives ownership of the message buffer back to the message server and tells the server to mark the message as delivered.
 . Not threaded: call an API "process(enum { ONE or ALL or FOREVER })" to make the above happen.
 1. SAFplus message server
 . Register sub-protocols by passing a well-known ID (0 to 255). Every received message has a small header: version, sub-protocol id. This ID is examined in every received message and the appropriate Wakeable is called (from an array of them). The same ACCEPTED/DEFERRED behavior applies as in the simple message server.
 . Not threaded: call an API "process(enum { ONE or ALL or FOREVER })" to make the above happen.
{{{#!highlight cpp
/** Handle a particular type of message
    @param type    A number from 0 to 255 indicating the message type
    @param handler Your handler function
    @param cookie  This pointer will be passed to your handler function
*/
void RegisterHandler(ClWordT type, MsgHandler handler, ClPtrT cookie);

/** Remove the handler for a particular type of message */
void RemoveHandler(ClWordT type);
}}}
 1. Threaded, Pooled, Queued SAFplus or Simple message server
 . This is the "final" class describing the most common and powerful message server combination. It adds a thread pool (set max threads to 0 if you don't want them) and/or a queue (set max queue size to 1 if you don't want a queue) to the above, and the server makes the call-backs in multiple threads.
 . There will be one Threaded, Pooled, Queued SAFplus Message Server automatically created for every SAFplus component. This is how SAFplus libraries talk to each other, similar to the "EO" in SAFplus 6.1. Note that the application can also register sub-protocols via the RegisterHandler API and thereby leverage much of the SAFplus infrastructure (a usage sketch follows this list).
{{{#!highlight cpp
// This is the singleton message server; one per process.
extern SAFplusMsgServer safplusMsgServer;
}}}
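As an illustration, an application could hook its own sub-protocol into this singleton server along the following lines. This is only a sketch: the MsgHandler signature shown and the call through safplusMsgServer are assumptions, and sub-protocol id 42 is an arbitrary example.

{{{#!highlight cpp
// Sketch only: the MsgHandler signature used below is an assumption made for
// this example, not the real SAFplus typedef, and sub-protocol id 42 is an
// arbitrary illustration.
void myProtocolHandler(ClWordT type, ClPtrT msgBuf, ClWordT msgLen, ClPtrT cookie)
{
    // msgBuf/msgLen hold the received message for sub-protocol 'type';
    // cookie is the context pointer supplied at registration time.
}

void initMyProtocol(ClPtrT appContext)
{
    // Assumed here: RegisterHandler is invoked through the per-process
    // safplusMsgServer singleton declared above.
    safplusMsgServer.RegisterHandler(42, myProtocolHandler, appContext);
}
}}}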
=== Client Side ===

 1. Message client with multithreaded sync/async send/reply
 . The client is capable of issuing multiple "sends" and then waiting for the multiple replies. For a particular reply, it figures out which send it pairs with and "wake"s that entity.
{{{#!highlight cpp
// NOTE: parts of this example were lost in formatting; the loop bounds (numNodes),
// the send call that passes replyQueue, and the reply-collection loop body are
// reconstructed assumptions.

// Multi-threaded synchronous
for (int i = 0; i < numNodes; i++)
{
    msg* packet = msgClient.sendReply(node_index_to_address(i), buffer, buffer_ownership_transferred);  // No wakeable passed so synchronous
}

// Simultaneous synchronous
queue replyQueue;  // replyQueue is a Wakeable that implements wake() to put the reply (the passed cookie) onto the queue.
msg* buffer = msgClient.getBuffer();
// ... fill in the buffer ...
for (int i = 0; i < numNodes; i++)
{
    msgClient.sendReply(node_index_to_address(i), buffer, buffer_ownership_transferred, replyQueue);  // replies are queued as they arrive
}
for (int i = 0; i < numNodes; i++)
{
    msg* reply = replyQueue.pop();  // wait for and collect each reply
}
}}}
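A fully asynchronous send can be sketched the same way by passing a Wakeable whose wake() acts as a completion callback. The wake() signature and the sendReply() overload taking a Wakeable below are assumptions made purely for illustration.

{{{#!highlight cpp
// Sketch only: the wake() signature and the sendReply() overload that accepts
// a Wakeable are assumptions made for this illustration.
class ReplyCallback : public Wakeable
{
public:
    virtual void wake(int amt, void* cookie)
    {
        msg* reply = (msg*) cookie;  // the reply message is handed over as the cookie
        // ... process the reply, then return the buffer to the message service ...
    }
};

ReplyCallback onReply;
msgClient.sendReply(node_index_to_address(0), buffer, buffer_ownership_transferred, onReply);
// Returns immediately; onReply.wake() is called when the matching reply arrives.
}}}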
== Performance ==

To measure messaging performance, first run the "msgReflector" program on the test machine and on another machine (so run 2 copies of this program, one local, one remote). Next, on the test machine run:
{{{
testMsgPerf -rnode
}}}
If testMsgPerf hangs after printing an info message about the next group of tests, it cannot communicate with msgReflector. Otherwise, data is output in the following table:
{{{
same process: len [     1] Latency [0.011307 ms]
same process: len [    16] Latency [0.009429 ms]
same process: len [   100] Latency [0.009582 ms]
same process: len [  1000] Latency [0.009511 ms]
same process: len [ 10000] Latency [0.010997 ms]
same process: len [     1] stride [    1] Bandwidth [203682.58 msg/s, 1.63 MB/s]
same process: len [    16] stride [    1] Bandwidth [216197.52 msg/s, 27.67 MB/s]
same process: len [   100] stride [    1] Bandwidth [161807.06 msg/s, 129.45 MB/s]
same process: len [  1000] stride [    1] Bandwidth [187099.49 msg/s, 1496.80 MB/s]
same process: len [ 10000] stride [    1] Bandwidth [177999.29 msg/s, 14239.94 MB/s]
same process: len [ 50000] stride [    1] Bandwidth [73956.29 msg/s, 29582.52 MB/s]
same process: len [     1] stride [   10] Bandwidth [247036.49 msg/s, 1.98 MB/s]
same process: len [    16] stride [   10] Bandwidth [246532.52 msg/s, 31.56 MB/s]
same process: len [   100] stride [   10] Bandwidth [226515.96 msg/s, 181.21 MB/s]
same process: len [  1000] stride [   10] Bandwidth [218375.18 msg/s, 1747.00 MB/s]
same process: len [ 10000] stride [   10] Bandwidth [194067.36 msg/s, 15525.39 MB/s]
same process: len [ 50000] stride [   10] Bandwidth [72648.28 msg/s, 29059.31 MB/s]
same process: len [     1] stride [   50] Bandwidth [257805.18 msg/s, 2.06 MB/s]
same process: len [    16] stride [   50] Bandwidth [221818.02 msg/s, 28.39 MB/s]
same process: len [   100] stride [   50] Bandwidth [249569.24 msg/s, 199.66 MB/s]
same process: len [  1000] stride [   50] Bandwidth [234750.05 msg/s, 1878.00 MB/s]
same process: len [ 10000] stride [   50] Bandwidth [173288.30 msg/s, 13863.06 MB/s]
same process: len [ 50000] stride [   50] Bandwidth [80331.67 msg/s, 4530.92 MB/s]
same process: len [    32] stride [ 1000] Bandwidth [254485.17 msg/s, 65.15 MB/s]
}}}

= Message Reliable =

== Socket Reliable ==

Socket reliable is a thin layer on top of the transport layer itself. This socket should meet the following criteria:
 * provide reliable delivery up to a maximum number of retransmissions
 * provide in-order delivery
 * be message based
 * have high performance
 * support peer-to-peer connections
 * support multiple connections
 * support fragmentation of larger messages

The programming interface provides the MsgReliableSocket, MsgReliableSocketServer, MsgReliableSocketClient, and ReliableFragment classes, which extend the conventional C++ socket programming interface.

=== Detail Design ===

==== Client side ====

The sender splits a message into multiple reliable fragments and sends them out. Reliable fragments are stored on the unacknowledged sent queue until an ACK is received from the receiver. Retransmission occurs as a result of receiving a NACK fragment or a retransmission timer time-out (these rules are sketched in code after the list below).
 * When an ACK fragment is received, all fragments whose fragment Id is less than the ACK number are removed from the unacknowledged sent queue.
 * When a NACK fragment is received, the fragments specified in the message are removed from the unacknowledged sent queue. The fragments to be retransmitted are determined by examining the ACK number and the last out-of-sequence ACK number in the NACK fragment: all fragments between, but not including, these two sequence numbers that are still on the unacknowledged sent queue are retransmitted.
 * When a retransmission time-out occurs, all fragments on the unacknowledged sent queue are retransmitted.
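The sender-side rules above can be summarized in the following sketch. It uses invented names (FragmentSender, unackedQueue) and a std::map keyed by fragment Id purely for illustration; it is not the actual MsgReliableSocket implementation.

{{{#!highlight cpp
#include <cstdint>
#include <map>
#include <vector>

// Illustrative sketch of the sender-side rules above, using invented names;
// this is not the real ReliableFragment/MsgReliableSocket code.
struct ReliableFragment { /* fragment id, payload, flags, ... */ };

class FragmentSender
{
    std::map<uint32_t, ReliableFragment> unackedQueue;  // unacknowledged sent queue, keyed by fragment id

    void transmit(uint32_t fragId, const ReliableFragment& frag) { /* hand the fragment to the transport */ }

public:
    // ACK: every fragment with an id below the ACK number has been delivered.
    void onAck(uint32_t ackNumber)
    {
        unackedQueue.erase(unackedQueue.begin(), unackedQueue.lower_bound(ackNumber));
    }

    // NACK: drop the fragments the receiver says it already has, then retransmit
    // the gap between the cumulative ACK number and the last out-of-sequence ACK
    // number (both endpoints excluded).
    void onNack(uint32_t ackNumber, const std::vector<uint32_t>& receivedOutOfSeq, uint32_t lastOutOfSeqAck)
    {
        onAck(ackNumber);
        for (uint32_t id : receivedOutOfSeq) unackedQueue.erase(id);
        for (auto it = unackedQueue.upper_bound(ackNumber);
             it != unackedQueue.end() && it->first < lastOutOfSeqAck; ++it)
            transmit(it->first, it->second);
    }

    // Retransmission timer expiry: resend everything still unacknowledged.
    void onRetransmitTimeout()
    {
        for (const auto& entry : unackedQueue) transmit(entry.first, entry.second);
    }
};
}}}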
==== Server side ====

There are 2 reliable fragment queues:
 * an out-of-sequence queue
 * an in-sequence queue
When a fragment is received, it is put into one of these queues based on its fragment Id and the last fragment Id in the in-sequence queue. Duplicate fragments are deleted. When a fragment is put into the in-sequence queue, the receiver checks the out-of-sequence queue and moves any fragments that are now in order to the in-sequence queue.

==== Multiple connection ====

The server can receive messages from multiple clients in parallel. Each receiver contains multiple reliable sockets to handle multiple connections. When a client initiates a connection, it sends a SYN segment containing its configuration parameters to the server. The server creates a reliable socket to receive data from that client; this socket is put into the socket client list. Each socket in the socket client list contains a thread that receives fragments, combines all fragments, and notifies the server to read the message.

==== Peer to Peer ====

The server can send and receive messages in parallel.

==== Piggyback acknowledgments ====

Whenever a receiver sends a data, null, or reset segment to the transmitter, the receiver includes the sequence number of the last in-sequence data.

=== API ===

 * Create a message server:
{{{
MsgServer(uint_t port, uint_t maxPendingMsgs, uint_t maxHandlerThreads, Options flags=DEFAULT_OPTIONS, SocketType type = SOCK_DEFAULT);
}}}
 . To create a reliable MsgServer, set SocketType to SOCK_RELIABLE.

=== Test ===

 * testMsgServerReliable: This test implements a variety of functional tests to ensure that reliable messaging behaves correctly. Successful output looks like:
{{{
...
Fri Mar 11 14:27:14.628 2016 [testMsgServerReliable.cxx:173] (.0.585 : .TST.---:00000 : INFO) (main): Test case completed [MSG-SVR-UNT.TC001: simple send/recv test]. Subcases: [8] passed, [0] failed, [0] malfunction.
Fri Mar 11 14:27:14.628 2016 [testMsgServerReliable.cxx:173] (.0.585 : .TST.___:00000 : INFO) Test case completed [MSG-SVR-UNT.TC001: simple send/recv test]. Subcases: [8] passed, [0] failed, [0] malfunction.
Fri Mar 11 14:27:14.628 2016 [clTest.cxx:74] (.0.585 : .TST.---:00000 : INFO) (clTestGroupFinalizeImpl): Test completed. Cases: [1] passed, [0] failed, [0] malfunction.
}}}

= Message Shaping =

The leaky bucket algorithm is a method of temporarily storing a variable number of messages and releasing them as a set-rate output of messages. The leaky bucket is used to implement message shaping. Message shaping can be used to control metered-bandwidth Internet connections to prevent exceeding the allotted bandwidth.
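As a concrete illustration of the algorithm (not the SAFplus implementation), a leaky bucket can be sketched as a bounded queue that is drained at a fixed rate; all names and parameters below are invented for the example.

{{{#!highlight cpp
#include <chrono>
#include <cstddef>
#include <deque>

// Illustrative only: a minimal leaky bucket for message shaping. The class and
// parameter names are invented for this sketch; this is not the SAFplus code.
template <typename Msg>
class LeakyBucket
{
    std::deque<Msg> bucket;                       // temporarily stored messages
    std::size_t capacity;                         // how many messages may be held
    std::chrono::microseconds interval;           // time between output messages (1/rate)
    std::chrono::steady_clock::time_point nextDrain = std::chrono::steady_clock::now();

public:
    LeakyBucket(std::size_t cap, double msgsPerSecond)
      : capacity(cap),
        interval(static_cast<long>(1e6 / msgsPerSecond)) {}

    // Add an incoming message; returns false if the bucket would overflow.
    bool offer(const Msg& m)
    {
        if (bucket.size() >= capacity) return false;
        bucket.push_back(m);
        return true;
    }

    // Called periodically by a shaping thread: regardless of how bursty the
    // input was, messages leave at most one per 'interval'.
    template <typename SendFn>
    void drain(SendFn send)
    {
        auto now = std::chrono::steady_clock::now();
        while (!bucket.empty() && now >= nextDrain)
        {
            send(bucket.front());
            bucket.pop_front();
            nextDrain += interval;
        }
    }
};
}}}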