datagram queue length


Jonathan Ellis

I seem to be running into a limit of 64 queued datagrams. It doesn't look
like a byte-count limit: varying the size of the datagrams makes no
difference to the observed queue depth. If more datagrams are sent before
some are read, they are silently dropped. (By "silently" I mean that
tcpdump doesn't record them as dropped packets.) This is a problem
because, while my consumer can handle the overall load easily, the
requests often come in large bursts.

This only happens when the sending and receiving processes are on
different machines, btw.
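
In case it helps with reproducing this: on Linux, at least, my
understanding is that drops like these don't show up in tcpdump because
the packet really does arrive; it gets discarded later, when the socket's
receive queue is full, and the kernel's UDP counters tick up instead.
A quick, Linux-only sketch for watching those counters (this assumes the
/proc/net/snmp interface; "InErrors" climbing while tcpdump stays quiet
would point at receive-queue overflow rather than the network):

# <check kernel-side udp drop counters -- Linux only>
def udp_counters():
    # /proc/net/snmp has two "Udp:" lines: field names, then values
    f = open('/proc/net/snmp')
    rows = [line.split() for line in f if line.startswith('Udp:')]
    f.close()
    return dict(zip(rows[0][1:], map(int, rows[1][1:])))

print(udp_counters())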

Can anyone tell me where this magic 64 number comes from, so I can
increase it?
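
My working guess (and it is only a guess) is that the real limit is the
socket receive buffer rather than a fixed datagram count: on Linux the
default comes from net.core.rmem_default, and each queued datagram is
charged for its payload plus per-packet overhead, so a smallish default
could plausibly top out around 64 small datagrams. If that's right,
asking for a bigger SO_RCVBUF on the receiving socket should deepen the
queue. An untested sketch (the 4 MB figure is arbitrary):

# <receiver with a larger receive buffer -- untested guess>
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Request a larger receive buffer; the kernel may clamp the value
# (e.g. to net.core.rmem_max), so read back what we actually got.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
sock.bind(('', 3001))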

Illustration follows.

-Jonathan

# <receive udp requests>
# start this, then immediately start the other
# _on another machine_
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', 3001))

# give the sender a head start so its burst queues up before we read
time.sleep(5)

while True:
    data, client_addr = sock.recvfrom(8192)
    print data

# <separate process to send stuff>
import socket

for i in range(200):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto('a' * 100, 0, ('***other machine ip***', 3001))
    sock.close()
 
