Jeff said:
The exception was thrown by the BufferedInputStream read. The
ServerSocket.accept() had completed, and we were reading from the input
stream when the problem occurred.
It is conceivable that it is the network stack's packet queue that is
filling up. The network stack may be signaling that or some other error
condition on the socket that is not directly related to the current read
attempt.
It is also conceivable that the native network stack does not signal a
failure to allocate the send and receive buffers until the first attempt
to read from the socket, or that the Java Socket implementation does not
pass on the error until a read attempt is made.
We made the BufferedInputStream one meg to reduce the number of reads and
off-load message reassembly. Our proprietary messages can be up to one meg.
Rather than do multiple reads, then reassemble the packet, we let
BufferedInputStream assemble the packet. Faster and less to debug.
But it doesn't work that way. The BufferedInputStream will never be
able to read more from the socket at one go than the capacity of the
socket's receive buffer. The BufferedInputStream will not request
additional bytes from the socket until it needs to satisfy a request for
more bytes than are waiting unread in its own internal buffer. Thus, as
Esmond said, it is of negligible value to give the BufferedInputStream a
buffer larger than the socket's receive buffer.
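If the receive buffer itself is the bottleneck, the place to ask for more
is the socket, not the stream. A minimal sketch (listenPort is a stand-in
for your own port, and the OS is free to grant less than you request):

// Request a bigger receive buffer before the ServerSocket is bound; for
// sizes above 64K this has to be done before bind() so that accepted
// sockets can advertise a large enough TCP window during the handshake.
ServerSocket server = new ServerSocket();
server.setReceiveBufferSize(1024 * 1024);
server.bind(new InetSocketAddress(listenPort));

Socket mySocket = server.accept();
System.out.println("receive buffer granted: "
        + mySocket.getReceiveBufferSize());

On Linux the granted size is also capped by the kernel (net.core.rmem_max),
so check what getReceiveBufferSize() actually reports rather than assuming
the request was honored.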
That doesn't mean you can't perform an efficient read without copying.
This is a perfect case for performing your own buffering instead of
using a BufferedInputStream. Here's an example that actually does what
you thought your BufferedInputStream was doing for you.
final static int BUF_SIZ = 1024000;
[...]
InputStream bytesIn = mySocket.getInputStream();
byte[] buffer = new byte[BUF_SIZ];
int total = 0;
int numRead;
// Append at offset 'total' until the peer closes the stream (read()
// returns -1) or the buffer is full; each read() returns at most what the
// socket's receive buffer currently holds.
while (total < BUF_SIZ
        && (numRead = bytesIn.read(buffer, total, BUF_SIZ - total)) > 0) {
    total += numRead;
}
After buffering the whole message you can hand it off as the byte array
+ number of bytes, or wrap it up in a suitably configured
ByteArrayInputStream to package those into a single object. Do note
that if messages tend to be smaller than the maximum then you are
wasting memory. Unless you can determine the message size before
allocating the buffer, you have an unavoidable space / speed tradeoff
going on here.
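For example (handleMessage is just a stand-in for whatever consumes a
complete message in your code):

// Expose only the bytes actually read; the consumer never sees the unused
// tail of the (possibly oversized) buffer.
InputStream message = new ByteArrayInputStream(buffer, 0, total);
handleMessage(message);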
Once the TCP receive buffer fills, it should not accept more packets. That
should be communicated to the sender by reducing the size of the receive
window in the TCP header. This is an ancient, reliable mechanism.
Which is apparently not working.
I think the problem lies in how the JVM accesses the TCP receive buffer. I
had hoped to find more information on the interaction of the JVM with the
TCP stack.
Regarding the most recent post: we are running on SuSE Linux, and we use
JRockit.
I can't speak specifically to JRockit, but it is highly unlikely to be
messing about with the TCP implementation. If it accesses the network
stack by anything other than the standard system API then I would
complain to the vendor. It is conceivable that the VM or the
ServerSocket implementation is setting TCP options that you did not
explicitly ask for, but before spending much effort in that direction I
would first try to rule out a problem at the system level.
It shouldn't be too hard to write a bit bucket TCP service in C
against which you could run your probe to see whether the network stack
behaves similarly.
John Bollinger