How does UDP buffer sizing affect performance of FileCatalyst
Last modified by Aly Essa on 27 October 2016 03:25 PM

You can find configuration information for UDP buffers here: https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/jgroups-perf-udpbuffer.html

Please give this page a read.


This article explains the impact of UDP buffer sizes on high-speed transfers and shows how to increase the relevant kernel limits.
Too little UDP buffer space causes the operating system kernel to discard UDP packets. The resulting packet loss has the consequences described below.

Latency--The time that passes between the initial transmission of a UDP packet and the eventual successful reception of its retransmission is latency that could have been avoided entirely had the original packet not been lost.

Bandwidth--Assuming that the initial transmission was not frivolous, UDP loss is likely to result in retransmission. The bandwidth consumed by retransmissions can become significant, especially when loss is heavy or when many receivers experience loss.

CPU Time--UDP loss causes the receiver to use CPU time to detect the loss, request one or more retransmissions, and perform the repair. Note that efficiently dealing with loss among a group of receivers requires the use of many timers, often of short duration. Scheduling and processing such timers generally require CPU time in both the operating system kernel ("system time") and in the application receiving UDP ("user time"). Additional CPU time is required to switch between kernel and user modes.

On the sender, CPU time is used to process retransmission requests and to send retransmissions as appropriate. As on the receiver, many timers are required for efficient retransmission processing, thus requiring many switches between kernel and user modes.

Memory--UDP receivers that can only process data in the order it was originally sent must allocate memory while waiting for retransmissions to arrive: loss causes data to arrive in an order different from the one the sender used, and memory is consumed restoring the original order.
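
As a minimal sketch of that memory cost (hypothetical sequence numbering, not FileCatalyst's actual wire protocol), the following Python holds out-of-order datagrams in memory until the gap is repaired:

    # Sketch: in-order delivery over a lossy datagram stream.
    # Sequence numbers and deliver() are illustrative only.
    def make_reorderer(deliver):
        pending = {}          # out-of-order packets held in memory
        state = {"next": 0}   # next sequence number owed to the application

        def on_packet(seq, payload):
            pending[seq] = payload
            # Drain everything that is now in order; a missing packet keeps
            # all later packets parked in 'pending' -- that is the memory cost.
            while state["next"] in pending:
                deliver(pending.pop(state["next"]))
                state["next"] += 1

        return on_packet

    on_packet = make_reorderer(deliver=print)
    on_packet(1, "b")   # held: packet 0 is still missing
    on_packet(0, "a")   # prints "a" then "b" once the gap is repaired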

Perhaps the two most significant consequences of too much UDP buffer space are slower recovery from loss and physical memory usage. Each is discussed in turn below.

Slower Recovery--To understand the consequences of too much UDP buffer space, consider a stream in which every tenth packet carries the current value of a rapidly-changing variable. Why buffer more than ten packets? Doing so would only increase the number of stale packets that must be read and discarded at the application layer. For a data stream like this, it is generally better to configure a ten-packet buffer in the kernel so that the application never has to read more than ten stale packets before returning to fresh ones.

It is often counter-intuitive, but excessive UDP buffering can actually increase the recovery time following a large packet loss event. UDP receive buffers should be sized to cover the worst-case CPU scheduling latency at the expected data rate, and no larger.

Physical Memory Usage--It is possible to exhaust available physical memory with UDP buffer space. Requesting a UDP receive buffer of 32 MB and then invoking ten receiver applications uses 320 MB of physical memory. 
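
An application can also check how much of a requested buffer the kernel actually granted. Here is a minimal Python sketch, assuming Linux semantics (per socket(7), the kernel caps the request at net.core.rmem_max and getsockopt reports a doubled value to account for bookkeeping overhead):

    import socket

    # Sketch: request a 32 MB UDP receive buffer and read back the grant.
    REQUESTED = 32 * 1024 * 1024

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)
    granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

    # On Linux, a result far below 2 * REQUESTED means the kernel limit
    # (net.core.rmem_max, discussed below) silently capped the request.
    print(f"requested {REQUESTED} bytes, kernel reports {granted} bytes")
    sock.close()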

Assuming that an average rate is known for a UDP data stream, the amount of latency that would be added by a full UDP receive buffer can be computed as:

    Max Latency = Buffer Size / Average Rate

Note: Take care to watch for different units in buffer size and average rate (e.g. kilobytes vs. megabits per second).
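
As a worked example with illustrative numbers, converting the rate to bytes per second before dividing:

    # Worst-case latency added by a full UDP receive buffer.
    # Illustrative numbers: an 8 MB buffer on a 500 megabit/s stream.
    buffer_bytes = 8 * 1024 * 1024        # buffer size, in bytes
    rate_bits_per_sec = 500_000_000       # average rate, in bits/s

    rate_bytes_per_sec = rate_bits_per_sec / 8    # convert units first
    max_latency_sec = buffer_bytes / rate_bytes_per_sec
    print(f"max added latency: {max_latency_sec * 1000:.0f} ms")  # ~134 ms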

Assuming that an average rate is known for a UDP data stream, the buffer size needed to avoid loss for a given worst-case CPU scheduling latency can be computed as:

    Buffer Size = Max Latency * Average Rate

Note: Since data rates are often measured in bits per second while buffers are often allocated in bytes, careful conversion may be necessary.
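
Again with illustrative numbers, sizing a buffer to ride out a 50 ms worst-case scheduling stall on a 1 gigabit/s stream:

    # Receive buffer needed to absorb a worst-case scheduling stall.
    max_latency_sec = 0.050               # worst-case scheduling latency
    rate_bits_per_sec = 1_000_000_000     # average rate, in bits/s

    buffer_bytes = max_latency_sec * rate_bits_per_sec / 8   # bits -> bytes
    print(f"buffer needed: {buffer_bytes / (1024 * 1024):.1f} MB")  # ~6.0 MB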


The kernel variable that limits the maximum size allowed for a UDP receive buffer has a different name and default value on each kernel, as shown in the following table:

Default UDP buffers:

    OS               Variable             Default
    Linux            net.core.rmem_max    131071
    Solaris          udp_max_buf          262144
    FreeBSD, Darwin  kern.ipc.maxsockbuf  262144
    AIX              sb_max               1048576
    Windows          none we know of      seems to grant all reasonable requests

The following table gives the commands needed to set the kernel UDP buffer limit to 8 MB. Root privilege is required to execute these commands.

Recommended UDP buffers (8+ MB):

    Linux            sysctl -w net.core.rmem_max=8388608
    Solaris          ndd -set /dev/udp udp_max_buf 8388608
    FreeBSD, Darwin  sysctl -w kern.ipc.maxsockbuf=8388608
    AIX              no -o sb_max=8388608 (note: AIX only permits sizes of 1048576, 4194304, or 8388608)
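
After making the change, it is worth reading the limit back. Here is a small Linux-only Python check, reading the procfs file behind net.core.rmem_max (the other kernels use different mechanisms):

    # Linux-only sketch: confirm net.core.rmem_max is at least 8 MB.
    TARGET = 8 * 1024 * 1024

    with open("/proc/sys/net/core/rmem_max") as f:
        current = int(f.read().strip())

    if current < TARGET:
        print(f"rmem_max is {current}; raise it with sysctl -w net.core.rmem_max={TARGET}")
    else:
        print(f"rmem_max is {current}; large UDP receive buffers are available")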


Making Changes Survive Reboot
The AIX command given above will change the current value and automatically modify /etc/tunables/nextboot so that the change will survive rebooting. Other platforms require additional work described below to make changes survive a reboot.

For Linux and FreeBSD, simply add the variable setting given above to /etc/sysctl.conf, leaving off the sysctl -w part (on Linux, the line is net.core.rmem_max=8388608).

We haven't found a convention for Solaris, but would love to hear about it if we've missed something. We've had success just adding the ndd command given above to the end of /etc/rc2.d/S20sysetup.


Interpreting the output of netstat is important for detecting UDP loss. Unfortunately, the output varies considerably from one flavor of Unix to another, so we cannot give a single set of instructions that works everywhere.

For each Unix flavor, we tested under normal conditions and then under conditions that force UDP loss, keeping a close eye on the output of netstat -s before and after each test. This revealed which statistics are related to UDP packet loss. Output from Solaris and FreeBSD netstat was the most intuitive; Linux and AIX were much less so. The following sections give the command we used on each platform and highlight the output that matters for detecting UDP loss.

Detecting Solaris UDP Loss
Use netstat -s. Look for udpInOverflows. It will be in the IPv4 section, not in the UDP section as you might expect. For example:

IPv4:
udpInOverflows = 82427

Detecting Linux UDP Loss
Use netstat -su. Look for packet receive errors in the Udp section. For example:

Udp:
38799 packet receive errors
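
The same counters can be read programmatically from /proc/net/snmp, which is where netstat gets its numbers. A Linux-only sketch (on modern kernels the Udp rows also include RcvbufErrors, counting datagrams dropped specifically because the receive buffer was full):

    # Linux-only sketch: read UDP error counters from /proc/net/snmp.
    def udp_counters():
        with open("/proc/net/snmp") as f:
            rows = [line.split() for line in f if line.startswith("Udp:")]
        names, values = rows[0][1:], rows[1][1:]   # first Udp: row is headers
        return dict(zip(names, (int(v) for v in values)))

    stats = udp_counters()
    print("InErrors:     ", stats.get("InErrors", 0))
    print("RcvbufErrors: ", stats.get("RcvbufErrors", 0))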

Detecting Windows UDP Loss
Use netstat -s. Look for Receive Errors in the UDP Statistics for IPv4 section. For example:

UDP Statistics for IPv4
Receive Errors = 131213

Detecting AIX UDP Loss
Use netstat -s. Look for fragments dropped (dup or out of space) in the ip section. For example:

ip:
77070 fragments dropped (dup or out of space)
Detecting FreeBSD and Darwin UDP Loss
Use netstat -s. Look for dropped due to full socket buffers in the udp section. For example:

udp:
6343 dropped due to full socket buffers