How do I know that the server side has closed the connection when using a Java socket?


lightning

I have some sockets connected to the server from when I initialize a socket pool. After using a socket I put it back into the pool for the next use, but sometimes the server goes down and closes some of my sockets.
How do I know when that has happened?

It seems that isConnected(), isClosed(), etc. cannot tell me the real state. Only when I read from the input stream do I get -1. Is this the only way? What's the best practice in this situation?

My current solution is:
1. setSoTimeout(1)
2. wrap the socket's input stream in a PushbackInputStream
3. read one byte from the input stream; if the return value is not -1, or read() throws a timeout exception, the socket is fine; otherwise I need to purge it

I need advice....
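A minimal sketch of that check, assuming each pooled socket's stream is already wrapped in a PushbackInputStream (the helper name looksAlive is made up for illustration):

import java.io.IOException;
import java.io.PushbackInputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Hypothetical pool helper: returns true if the pooled socket still looks usable.
static boolean looksAlive(Socket socket, PushbackInputStream in) {
    try {
        socket.setSoTimeout(1);            // block for at most ~1 ms on the probe read
        int b = in.read();                 // try to read a single byte
        if (b == -1) {
            return false;                  // the peer performed an orderly close
        }
        in.unread(b);                      // push the byte back for the real reader
        return true;
    } catch (SocketTimeoutException e) {
        return true;                       // nothing to read yet; the connection is still open
    } catch (IOException e) {
        return false;                      // reset or other failure: purge this socket
    }
}

The caller would restore the pool's normal SO_TIMEOUT before handing the socket out again.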
 

EJP

lightning said:
I have some sockets connected to the server from when I initialize a socket pool. After using a socket I put it back into the pool for the next use, but sometimes the server goes down and closes some of my sockets.
How do I know when that has happened?

By trying to use the socket.
It seems that isConnected(), isClosed(), etc. cannot tell me the real state.

Correct. They're not specified to do that, and they can't do that. There
is no API in TCP/IP that will tell you whether a connection is closed,
other than the result of trying to read from or write to it.
Only when I read from the input stream do I get -1. Is this the only way?

A write will throw an IOException. These are the only two ways.
What's the best practice in this situation?

Socket read timeouts.
My current solution is:
1. setSoTimeout(1)
2. wrap the socket's input stream in a PushbackInputStream
3. read one byte from the input stream; if the return value is not -1, or read() throws a timeout exception, the socket is fine; otherwise I need to purge it

RMI does connection pooling. Its technique is to send an
RMI-protocol-defined PING request whenever it wants to reuse the socket.
If this doesn't result in a successful reply within a very short period
of time the socket is closed and a new one created.
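A rough sketch of that validate-on-reuse idea, assuming a line-based application protocol with a PING/PONG exchange (the request and reply strings and the method name are made up for illustration; this is not RMI's actual wire format):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

// Hypothetical check run by the pool before handing out a socket.
static boolean pingBeforeReuse(Socket socket, BufferedReader in, int timeoutMillis) {
    try {
        socket.setSoTimeout(timeoutMillis);            // the "very short period of time"
        OutputStream out = socket.getOutputStream();
        out.write("PING\n".getBytes(StandardCharsets.US_ASCII));
        out.flush();                                   // a write on a reset connection throws IOException

        String reply = in.readLine();                  // null means the peer has closed the stream
        return "PONG".equals(reply);
    } catch (SocketTimeoutException e) {
        return false;                                  // no reply in time: close it and create a new socket
    } catch (IOException e) {
        return false;                                  // reset, broken pipe, etc.
    }
}

Here the BufferedReader is assumed to be created once per connection, so that any bytes it buffers past the reply are not lost to later readers.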
 

lightning


thx! You're so cool!
 

jolz

I have some sockets connected to the server from when I initialize a socket pool.
A write will throw an IOException. These are the only two ways.

This is the only way (there's also SO_KEEPALIVE, but that also writes). read() won't help determine whether the socket is closed, since it may block forever.
Socket read timeouts.

Only if the socket is manually closed after a timeout. However, I prefer pings in the protocol, especially on the server side. Timeouts on the client side are acceptable, since the server usually responds to the client's requests. But often the client doesn't have to send requests, and it may still not be a good idea to disconnect it.
If one can live with a two-hour delay before disconnect detection, then SO_KEEPALIVE, if available, is the easiest way. It also makes synchronous communication easier to implement than a protocol with pings.
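For completeness, enabling SO_KEEPALIVE is a one-liner in java.net; the probe timing is an operating-system setting (commonly around two hours by default) and cannot be tuned through the Socket API:

import java.net.Socket;
import java.net.SocketException;

// Sketch: enable TCP keepalive on a pooled socket. A dead peer eventually
// surfaces as an IOException on a later read() or write(), not as a callback.
static void enableKeepAlive(Socket socket) throws SocketException {
    socket.setKeepAlive(true);
}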
 

EJP

jolz said:
This is the only way (there's also SO_KEEPALIVE, but that also writes). read() won't help determine whether the socket is closed, since it may block forever.

As the 'other way' I stated was a read timeout, this comment doesn't
make any sense.
Only if the socket is manually closed after a timeout.

The timeout tells you there is something wrong. What you do about it is
up to you.

A curious post.
 

jolz

As the 'other way' I stated was a read timeout

That's wrong. The OP asked how to detect whether the socket is closed. It can't be done with a timeout. A timeout may occur simply because nothing was written for the specified time.
The timeout tells you there is something wrong.

Not always. For example, a chat client may be connected to the server and not send any messages. The server may choose not to disconnect idle clients. But if pings are implemented, the server will know whether the client is idle but connected, or whether the socket was closed.
 

Martin Gregorie

That's wrong. The OP asked how to detect whether the socket is closed. It can't be done with a timeout. A timeout may occur simply because nothing was written for the specified time.


Not always. For example, a chat client may be connected to the server and not send any messages. The server may choose not to disconnect idle clients. But if pings are implemented, the server will know whether the client is idle but connected, or whether the socket was closed.

Besides, if the server has posted a read and is waiting for it to return when the client closes the connection or crashes, the read request will return a read count of zero. Of course it goes without saying that the same applies at the client end if the server closes the socket without sending anything.

IME this is the only way you can detect the socket being closed. Writing
code to use this as its usual way of detecting socket closure (rather
than waiting for some sort of 'CLOSE' command to be sent) has the
advantage that the reader can clean up tidily regardless of whether the
other end called close(), crashed or was killed. It works well regardless
of whether the server uses read timeouts or not. However, using read
timeouts just wastes CPU cycles unless the server has timeout activities,
such as logging a user out, that it is required to do.
 

EJP

Martin said:
Besides, if the server has posted a read and is waiting for it to return
when the client closes the connection or crashes, the read request will
return a read count of zero.

No, the read request will return a read count of -1 in Java.
IME this is the only way you can detect the socket being closed.

There are two ways to detect an orderly close: by reading and by
writing; and one way to detect a disorderly close: by writing.

I'm thoroughly confused as to what is being said in this thread now. I've already been misquoted by another poster. Let's start again. The OP asked how to detect socket closure, by which he really meant connection closure or loss. There are two ways: read() returning -1, or write() throwing a connection reset exception. Read timeouts *with an appropriate timeout value* can also be used to *indicate* dead connections, as can keepalive if it's enabled and you can wait two hours; application-level pings can also do that, if they can be fitted into the application protocol: frequently they can't.
using read timeouts just wastes CPU cycles

Using read timeouts doesn't 'waste CPU cycles'. Any networking program
that doesn't use a read timeout is improperly written IMO, *because*
there are conditions under which a read can block forever.
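A sketch of those two detection paths with a read timeout as a safety net (the timeout value and buffer size below are arbitrary placeholders, not recommendations from this thread):

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

// Sketch: a read loop that never blocks forever and notices an orderly close.
// A write() on a reset connection shows up separately, as an IOException such
// as "connection reset" or "broken pipe".
static void readLoop(Socket socket) throws IOException {
    socket.setSoTimeout(30000);           // placeholder: pick a value that suits the protocol
    InputStream in = socket.getInputStream();
    byte[] buffer = new byte[8192];
    while (true) {
        try {
            int n = in.read(buffer);
            if (n == -1) {                // read() returning -1: orderly close by the peer
                socket.close();
                return;
            }
            // process buffer[0..n) here
        } catch (SocketTimeoutException e) {
            // no data within the timeout: the connection may be dead or merely idle;
            // what to do about it is up to the application
        }
    }
}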
 

Martin Gregorie

No, the read request will return a read count of -1 in Java.
Not always, though testing for < 1 is probably a good idea. A socket that
is closed for any reason when your program is waiting on a read returns
zero to both C and Java in a Linux or SVR4 environment.
There are two ways to detect an orderly close: by reading and by
writing; and one way to detect a disorderly close: by writing.
I suppose that depends on the protocol you use: for small data amounts I
typically use command/response pairs and (often) a stateless server.
Using read timeouts doesn't 'waste CPU cycles'. Any networking program
that doesn't use a read timeout is improperly written IMO, *because*
there are conditions under which a read can block forever.
Read what I said: if you don't take any specific action, such as logging
out an idle user, when you get a read timeout then

read()
{
    process the message
}

and

read()
{
    if time_out
        continue
    else
        process the message
}

are functionally identical, but the second example wastes cycles each
time it ignores the timeout and re-posts the read.

I've seen code that does just this for no apparent reason: it may be
harmless if there are only a few connections, but it can cause
performance problems if the application is watching a large number of
sockets.
 

EJP

Martin said:
A socket that
is closed for any reason when your program is waiting on a read returns
zero to both C and Java in a Linux or SVR4 environment.

No. It returns 0 in the C Sockets API. It is specified to return < 0 in
Java, and it does, on every platform I've ever used. Check your facts.
In Java it only returns zero on a Channel in non-blocking mode when
there is no data.
the second example wastes cycles each
time it ignores the timeout and re-posts the read.

Compared with the length of any *sensible* timeout, any such 'waste' is
infinitesimal. A sensible timeout should be several seconds at least.
But I agree that just cycling on a timeout is completely pointless -
what was it for?
 

Martin Gregorie

No. It returns 0 in the C Sockets API. It is specified to return < 0 in
Java, and it does, on every platform I've ever used. Check your facts.
In Java it only returns zero on a Channel in non-blocking mode when
there is no data.
Long ago (JDK 1.2 IIRC) I buried this stuff in a wrapper that makes request/response handling easy and haven't looked at it since: consequently I'd forgotten the details of its inner workings.

The wrapper receive() method returns a zero-length String if the socket
is closed and returns a non-empty string in other circumstances.
Internally it uses this read loop:

n = 1;
while (n > 0)
{
    byte [] received = new byte[MAXBUFF];
    lth = in.read(received);
    if (lth >= 0)
        buff.append(new String(received, 0, lth));

    n = in.available();
}

in.read() is a blocking read with no timeout. In practice the loop only returns without reading anything if the socket is closed at the client end.
Compared with the length of any *sensible* timeout, any such 'waste' is infinitesimal. A sensible timeout should be several seconds at least. But I agree that just cycling on a timeout is completely pointless - what was it for?
Pass. It may have been left over from debugging the program: sometimes it can be useful to see a server process 'ticking' in a trace log while it's idle.
 

EJP

Martin said:
n = 1;
while (n > 0)
{
    byte [] received = new byte[MAXBUFF];
    lth = in.read(received);
    if (lth >= 0)
        buff.append(new String(received, 0, lth));

    n = in.available();
}

Well there are *lots* of problems with this code.

1. Don't use available() == 0 to indicate completion of a request, or to
indicate EOS either; it doesn't mean that.

2. If the socket is an SSLSocket, available() will always return 0. Some
InputStreams do that as well.

3. You're allocating a new buffer every time around the loop. This is
churning memory. Move that buffer declaration outside both loops.

4. If lth < 0 you should break the loop *and* close the socket.

5. You're assuming that every chunk of bytes read, of whatever length,
can be turned into a String. This assumption isn't valid.

EJP
 

Martin Gregorie

Martin said:
n = 1;
while (n > 0)
{
    byte [] received = new byte[MAXBUFF];
    lth = in.read(received);
    if (lth >= 0)
        buff.append(new String(received, 0, lth));

    n = in.available();
}

Well there are *lots* of problems with this code.
Not really: it works well within its intended application area. It is
part of a suite of classes (ClientConnection, ListenerConnection,
ServerConnection) that use standard TCP/IP connections to transfer ASCII
request/response message pairs between clients and a server.
1. Don't use available() == 0 to indicate completion of a request, or to
indicate EOS either; it doesn't mean that.
It does what I intended: it will allow messages that have been sent as
single units to be reassembled if the TCP/IP stack should fragment them.
A more robust approach would be to precede every message with a fixed
length byte count and then read until the right number of bytes have been
received but so far that has not been necessary.
2. If the socket is an SSLSocket, available() will always return 0. Some
InputStreams do that as well.
It never will be: the ServerConnection class implementing the receive()
method is only created by a companion ListenerConnection class that
constructs it from a vanilla Socket.
3. You're allocating a new buffer every time around the loop. This is
churning memory. Move that buffer declaration outside both loops.
That may be an issue in C. Java has good gc handling for transient objects, so the scope extension that this would require is not necessary.
4. If lth < 0 you should break the loop *and* close the socket.
That's one way. I prefer to use explicit open() and close() operations
for classes that handle files and sockets. This makes correctness by
inspection easier for the calling code as well as ensuring that
constructors don't throw exceptions.
5. You're assuming that every chunk of bytes read, of whatever length,
can be turned into a String. This assumption isn't valid.
It is in the limited application scope for which this method was
implemented. See above.
 

EJP

Martin said:
It does what I intended: it will allow messages that have been sent as
single units to be reassembled if the TCP/IP stack should fragment them.

and if the intermediate routers haven't delayed the packets so that the
tail of the request isn't in the socket receive buffer when you test
available(). Your code may work from now till doomsday but there's
nothing in TCP/IP that guarantees it.
That may be an issue in C. Java has good gc handling for transient
objects, so the scope extension that would require is not necessary.

Still a good idea, nu? Better to avoid avoidable work.
That's one way.

You should +certainly+ break the loop as soon as you detect -1. This is
SOP, not something I invented overnight ...
 

Nigel Wade

Martin said:
Martin said:
n = 1;
while (n > 0)
{
    byte [] received = new byte[MAXBUFF];
    lth = in.read(received);
    if (lth >= 0)
        buff.append(new String(received, 0, lth));

    n = in.available();
}

Well there are *lots* of problems with this code.
Not really: it works well within its intended application area. It is
part of a suite of classes (ClientConnection, ListenerConnection,
ServerConnection) that use standard TCP/IP connections to transfer ASCII
request/response message pairs between clients and a server.
1. Don't use available() == 0 to indicate completion of a request, or to
indicate EOS either; it doesn't mean that.
It does what I intended: it will allow messages that have been sent as
single units to be reassembled if the TCP/IP stack should fragment them.

Actually it won't. It might some of the time, in fact it might all of the time - up until the first time it doesn't. There is no such guarantee from available(); all it tells you is how much data is waiting in the input buffer. The TCP/IP stack will present whatever data is available as it becomes available. If only some of the packets have arrived, provided they are complete in sequence up to that point, those packets will be reported by available() and you will only get a partial message.

TCP/IP does not know anything about "message" or "record" structures within the
data stream because TCP/IP has no concept of structure. It is a simple
sequential byte stream. Attempting to use available() to add structure to the
stream is ultimately doomed to failure. In practice it might work forever, but
it is a fatally flawed algorithm.
A more robust approach would be to precede every message with a fixed
length byte count and then read until the right number of bytes have been
received but so far that has not been necessary.

It is the only valid way to read a specific number of bytes from a TCP/IP
stream. You've been lucky so far, and your luck will almost certainly run out
eventually. Algorithms that rely on luck should be reserved for gambling
establishments ;-)
 

Martin Gregorie

It is the only valid way to read a specific number of bytes from a
TCP/IP stream. You've been lucky so far, and your luck will almost
certainly run out eventually. Algorithms that rely on luck should be
reserved for gambling establishments ;-)
Fair comment. That's something I should have spotted and didn't.

On thinking about it, this code has so far only been used across a minimum/no-delay network, which is doubtless why it's been working OK. As I can easily and transparently convert the classes to use a message length count, I'll do just that.
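A sketch of what that length-prefixed framing might look like (illustrative only; these are not the actual ClientConnection/ServerConnection classes, and the 4-byte count is an assumption):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Sketch: every message is preceded by a fixed-length byte count, and the
// receiver reads until exactly that many bytes have arrived.
final class FramedMessages {

    static void send(Socket socket, String message) throws IOException {
        byte[] body = message.getBytes(StandardCharsets.US_ASCII);
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(body.length);      // 4-byte length prefix, written big-endian (network byte order)
        out.write(body);
        out.flush();
    }

    static String receive(Socket socket) throws IOException {
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int length = in.readInt();      // throws EOFException if the peer has closed the connection
        byte[] body = new byte[length];
        in.readFully(body);             // blocks until the whole message has arrived
        return new String(body, StandardCharsets.US_ASCII);
    }
}

Writing the count big-endian keeps the C end of the protocol simple: it can read the prefix and convert it with ntohl().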
 

Nigel Wade

Martin said:
Fair comment. That's something I should have spotted and didn't.

On thinking about it, this code has so far only been used across a minimum/no-delay network, which is doubtless why it's been working OK.

Most likely. You can often get away with this "mistake" on local networks, for a
while anyway. I've got that particular T-shirt.
As I can easily and transparently convert the classes to use a message length count, I'll do just that.

Good idea. It will probably save time in the long run. The last thing you want
is for the code to pass all tests in your local network, and fail on a
production system or on a loaded network.
 

Martin Gregorie

Most likely. You can often get away with this "mistake" on local
networks, for a while anyway. I've got that particular T-shirt.

Good idea. It will probably save time in the long run. The last thing
you want is for the code to pass all tests in your local network, and
fail on a production system or on a loaded network.

Job done. Equivalent C functions changed as well to retain
interoperability. A bonus was that I was able to reduce some code
duplication by shifting it into a superclass.
 
