What steps will reproduce the problem?
1. Start 10 processes on machine A that subscribe to messages published from machine B.
2. Have machine B publish messages quickly, for example size 2000 bytes, count 10000, at a high delivery rate (700 Mb/s); a rough publisher sketch is shown after this list.
3. Unrecoverable data loss occurs every time the test is run.
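For reference, a minimal sketch of the kind of publisher used in step 2, written against the 2.1-era libzmq C API (zmq_init/zmq_msg_t-style send, ZMQ_RATE as int64_t in kbit/s). The interface name, multicast group, port, and rate value are placeholders, not the exact values from my test:

```c
/* Hedged repro sketch: an epgm publisher approximating step 2.
 * Endpoint and rate are placeholders for the real test configuration. */
#include <string.h>
#include <zmq.h>

int main (void)
{
    void *ctx = zmq_init (1);                   /* one I/O thread */
    void *pub = zmq_socket (ctx, ZMQ_PUB);

    int64_t rate = 700000;                      /* ZMQ_RATE in kbit/s (2.1 API): ~700 Mb/s */
    zmq_setsockopt (pub, ZMQ_RATE, &rate, sizeof rate);

    zmq_connect (pub, "epgm://eth0;239.192.1.1:5555");

    char payload[2000];
    memset (payload, 'x', sizeof payload);

    int i;
    for (i = 0; i < 10000; i++) {
        zmq_msg_t msg;
        zmq_msg_init_size (&msg, sizeof payload);
        memcpy (zmq_msg_data (&msg), payload, sizeof payload);
        zmq_send (pub, &msg, 0);                /* 2.1-era send signature */
        zmq_msg_close (&msg);
    }

    zmq_close (pub);
    zmq_term (ctx);
    return 0;
}
```

The 10 subscriber processes on machine A simply create ZMQ_SUB sockets, set an empty ZMQ_SUBSCRIBE filter, and connect to the same epgm endpoint.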
What is the expected output? What do you see instead?
I expect the data loss to eventually be repaired by retransmission from the sender. Instead, the behaviour seen in the trace log is weird: the retry count goes up to 50 (hardcoded by 0MQ) and the retransmission is cancelled.
What version of the product are you using? On what operating system?
OpenPGM packaged in zeromq-2.1.11, Linux 2.6.18-194.8.1.el5
Please provide any additional information below.
Referring to the sender's log (log level: DEBUG), pgm_on_deferred_nak() is only called once when a new message (NAK) arrives in pgm_recvmsgv(). Why not flush all RDATA, the way wait_for_event() does in blocking mode? If there are a lot of NAKs, the sender has little chance to do the repair work.
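To make the question concrete, here is a self-contained sketch of the two servicing strategies being compared. The nak_entry, nak_queue, and repair_one_nak() names are hypothetical stand-ins, not OpenPGM's real internal API; they only illustrate the difference between servicing one deferred NAK per wakeup and draining the whole backlog:

```c
/* Hedged sketch: one-shot vs. drain-all handling of deferred NAKs.
 * All types and helpers here are illustrative placeholders. */
#include <stdio.h>

typedef struct nak_entry {
    unsigned long sequence;        /* sequence number requested for repair */
    struct nak_entry *next;
} nak_entry;

typedef struct { nak_entry *head; } nak_queue;

/* Hypothetical repair step: send one RDATA packet for this sequence. */
static void repair_one_nak (const nak_entry *nak)
{
    printf ("retransmitting RDATA for sequence %lu\n", nak->sequence);
}

/* One-shot handling: mirrors the reported behaviour, where only a single
 * deferred NAK is serviced per pgm_recvmsgv() wakeup. */
static void on_deferred_nak_once (nak_queue *q)
{
    if (q->head) {
        nak_entry *nak = q->head;
        q->head = nak->next;
        repair_one_nak (nak);
    }
}

/* Drain-everything handling: what this report suggests, analogous to how
 * wait_for_event() keeps flushing RDATA in blocking mode. */
static void on_deferred_nak_drain (nak_queue *q)
{
    while (q->head) {
        nak_entry *nak = q->head;
        q->head = nak->next;
        repair_one_nak (nak);
    }
}

int main (void)
{
    nak_entry e[3] = { {101, &e[1]}, {102, &e[2]}, {103, NULL} };
    nak_queue q = { &e[0] };

    on_deferred_nak_once (&q);     /* repairs only sequence 101 */
    on_deferred_nak_drain (&q);    /* repairs the remaining 102 and 103 */
    return 0;
}
```

Under a heavy NAK load, the one-shot pattern leaves repairs queued until the next wakeup, which matches the stalled-retransmission behaviour seen in the trace log.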
Original issue reported on code.google.com by [email protected] on 26 Apr 2012 at 3:20