UDP and IP fragmentation

To all,

I am running a Windows-based high-performance computing application that uses "reliable" multicast (29West) on a gigabit LAN. All systems are logically on the same VLAN and even on the same physical switch. The application is set to use an 8k buffer, which causes IP fragmentation when datagrams are transmitted. The application is sensitive to any latency or data loss (of course) and uses a proprietary mechanism to provide TCP-like retransmissions if data loss actually occurs. Unfortunately, because of the fragmentation, during the retransmission window the entire datagram's worth of IP fragments must be resent even though only one fragment may have been lost.

If the buffer size is tweaked down to ~1460 bytes, that may eliminate the fragmentation, but will the side effects be lower throughput and possibly more latency? Is there a sweet spot for UDP on an ethernet segment?
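For what it's worth, the ~1460 figure is the TCP MSS on standard Ethernet; UDP's header is smaller than TCP's, so the unfragmented limit for UDP is slightly higher. A back-of-the-envelope sketch (assuming IPv4 with no IP options and no VLAN/tunnel overhead):

```python
# Largest UDP payload that avoids IPv4 fragmentation on a standard
# 1500-byte Ethernet MTU. Assumes no IP options and no extra
# encapsulation overhead (VLAN tags live outside the IP MTU).
ETH_MTU = 1500     # standard Ethernet IP MTU (no jumbo frames)
IP_HEADER = 20     # IPv4 header without options
UDP_HEADER = 8     # fixed UDP header size

max_udp_payload = ETH_MTU - IP_HEADER - UDP_HEADER
print(max_udp_payload)  # 1472
```

So a send buffer of 1472 bytes, not 1460, is the largest that fits in a single standard Ethernet frame for UDP over IPv4.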


First, figure out whether all of the above matters. :slight_smile:

Invest in a switch and NIC infrastructure that lets you stuff said 8k datagrams into
a >8k jumbo frame. Then make sure you've read and understood QoS basics, including
the generic stuff (packet scheduling, queuing/dequeuing concepts); investigate
what various vendors claim their switches do and then actually look around for
feedback about what others have -seen-.
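To illustrate why jumbo frames help here: the cost of losing one fragment scales with the fragment count, and a large enough jumbo MTU drops that count to one. A rough sketch (assuming IPv4, 20-byte headers, and a hypothetical 9000-byte jumbo MTU):

```python
import math

# Fragment count for an 8 KiB UDP datagram at two MTUs. Each IPv4
# fragment carries its own 20-byte header, and every fragment's
# payload except the last must be a multiple of 8 bytes.
def fragment_count(datagram_bytes: int, mtu: int, ip_header: int = 20) -> int:
    per_fragment = (mtu - ip_header) // 8 * 8  # payload per fragment, 8-byte aligned
    return math.ceil(datagram_bytes / per_fragment)

payload = 8 * 1024 + 8  # 8 KiB of application data plus the UDP header
print(fragment_count(payload, 1500))  # standard Ethernet: 6 fragments
print(fragment_count(payload, 9000))  # 9000-byte jumbo MTU: 1 fragment
```

With six fragments per datagram, any single fragment loss forces a full-datagram retransmit; with jumbo frames the datagram and the loss unit are the same thing.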

Finally, use all of that clue to make sure that the consultant you then hire to
do the work is actually doing their job.

No, I'm not (mostly) being facetious. It is mostly easy to get it "right" when
it works, but it is -not- easy to get it "right enough" when it doesn't work.