I have written a Java application that connects via a socket to a
third-party server. The server comms are byte-oriented and I've been
having problems with partial messages being returned from it. I have
therefore used DataInputStream and DataOutputStream with no buffering
or filter wrappers so as to properly monitor the byte streams to and
from the server.
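For reference, my connection setup is essentially the following (host
and port are placeholders):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

// Plain streams, no BufferedInputStream/BufferedOutputStream wrappers,
// so I can watch exactly what goes over the wire.
Socket sock = new Socket("server.example.com", 12345);
DataInputStream in = new DataInputStream(sock.getInputStream());
DataOutputStream out = new DataOutputStream(sock.getOutputStream());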
As soon as I connect, the server sends a response to which I must
reply. Everything proceeds correctly until a point where I expect a
'prompt' from the server. However, my client actually only receives the
first 8 bytes of the prompt. If I subsequently send an arbitrary
message to the server, the client then
receives the remainder of the message. If I then respond to this
prompt, I get no response whatsoever from the server.
The forms of the 'send' and 'receive' I use are as follows:
[RECEIVE]
byte[] m = new byte[300];
int count = in.read(m);
...where in is a DataInputStream. I check the return value,
count.
[SEND]
out.write(m,0,m.length);
out.flush();
...where out is a DataOutputStream, and m is a byte[].
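For what it's worth, here is the kind of read loop I imagine would be
needed if partial reads are indeed the issue (the fixed 'length'
parameter is hypothetical, since my protocol has no length prefix):

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Sketch only: block until exactly 'length' bytes have arrived.
// read() returning -1 means the server closed the connection.
static byte[] readExactly(DataInputStream in, int length) throws IOException {
    byte[] buf = new byte[length];
    int total = 0;
    while (total < length) {
        int n = in.read(buf, total, length - total);
        if (n == -1) {
            throw new EOFException("connection closed mid-message");
        }
        total += n;
    }
    return buf;
}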
Judging by the symptoms, it looks like the stream/socket is becoming
'clogged', somehow. I thought I would avoid this by invoking flush()
during a 'send', thus clearing the socket prior to a response from the
server being transmitted over it. I also always understood that the
underlying TCP/IP would guarantee that the entire message would be sent
but not necessarily in one read. At first sight this appears to be what
is happening; however, I thought TCP/IP's 'chunking' was dependent upon
an internal limit, and eight bytes seems too short for that.
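If arbitrary fragmentation is indeed the explanation, I gather the
usual idiom would be DataInputStream.readFully(), which blocks until
the whole buffer is filled (PROMPT_LENGTH below is a placeholder; I
haven't tried this yet):

byte[] m = new byte[PROMPT_LENGTH]; // hypothetical fixed prompt size
in.readFully(m);                    // blocks until all bytes arrive,
                                    // throws EOFException on early close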
I have considered re-designing the app. to use threads and
synchronization techniques. However, this seemed like overkill
given my assumptions regarding flush(), above. I have a feeling I'm
missing some other point, so any insights would
be gratefully received. Thanks.