I think this is a poor design. The server should (see the sketch after this list):
- wait for incoming connections from clients
- when a connection is received, set up any per-client context that's needed
- process requests from the client, generating at least one response per request
- when the connection closes, discard any client-specific context
- wait for the next connection.
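
In Java that shape is roughly the following. This is only a minimal sketch, not your code: the port number (4000), the line-based text protocol and the one-thread-per-client model are all assumptions of mine.

    import java.io.*;
    import java.net.*;

    public class Server {
        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(4000)) {
                while (true) {
                    Socket conn = listener.accept();          // wait for the next client
                    new Thread(() -> serve(conn)).start();    // one thread per connection
                }
            }
        }

        private static void serve(Socket conn) {
            try (Socket c = conn;                             // make sure the socket gets closed
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(c.getInputStream(), "UTF-8"));
                 PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(c.getOutputStream(), "UTF-8"), true)) {
                String request;
                while ((request = in.readLine()) != null) {   // null means the client closed
                    out.println("OK," + request);             // at least one response per request
                }
            } catch (IOException e) {
                // error on this connection: log it; the cleanup below happens either way
            }
            // client-specific context goes out of scope here; the accept loop keeps running
        }
    }

A thread pool or NIO selector scales better, but the accept / serve / clean-up shape stays the same.
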
The client should (again, sketch below):
- open a connection to the server
- send requests to the server
- read the server's response(s) to each request
- close the connection when it's done.
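
A matching client, under the same assumptions (line-based text, port 4000, localhost; the HELLO message is made up):

    import java.io.*;
    import java.net.*;

    public class Client {
        public static void main(String[] args) throws IOException {
            try (Socket conn = new Socket("localhost", 4000);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(conn.getInputStream(), "UTF-8"));
                 PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(conn.getOutputStream(), "UTF-8"), true)) {

                out.println("HELLO,1");               // send a request
                String reply = in.readLine();         // block until the response arrives
                if (reply == null) {
                    throw new EOFException("server closed the connection: protocol error");
                }
                System.out.println("server said: " + reply);
            }                                         // the client decides when to close
        }
    }
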
IMO the server should NEVER intentionally close a connection: as you've seen, this can cause problems. The method I outlined is much cleaner:
- the client knows when it has finished talking to the server and so
can close the connection.
- in this scheme any connection closure seen by the client is
ALWAYS an error.
- the logic of this scheme means that the server will be waiting for
a new request from the client when the connection is closed, so it is
ready either to handle the next request or to close its side of the
connection without needing to disentangle incomplete processing.
- designing the protocol so that every client request generates at
least one server response makes error checking easy (the client
always gets a response or sees the connection close due to an error).
A simple ACK response is short, so the overhead is minimal; see the
dispatcher sketch after this list.
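
One way to keep the "every request gets at least one response" rule in a single place is a dispatcher that never returns null. The command names here (PING, ECHO, STOP) are invented for illustration:

    public class Dispatcher {
        /** Returns the response line to send back; never returns null. */
        static String handle(String request) {
            String[] fields = request.split(",", 2);
            switch (fields[0]) {
                case "PING": return "OK";                      // bare ACK: one short line
                case "ECHO": return "OK," + (fields.length > 1 ? fields[1] : "");
                case "STOP": return "OK";                      // caller shuts down after replying
                default:     return "ERR,unknown command " + fields[0];
            }
        }
    }

The serve() loop in the earlier sketch would then become out.println(handle(request)), and even the error branch produces a response rather than a silent disconnect.
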
If you design the protocol so that the messages contain text then using
a packet sniffer is a lot easier. If you add debugging code that prints
all messages sent and received then you don't need a packet sniffer, and
debugging process-to-process connections within a single computer is
simple. Specifying the protocol in terms of formatted records and using
record buffers to handle the messages rather than raw
streams also simplifies the protocol logic and helps a lot with
debugging. I normally design messages as length-delimited records
containing comma-separated fields, but ymmv.
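
For what it's worth, one possible framing along those lines is below. The 4-byte binary length prefix is my choice of delimiter; a textual length field or a newline terminator would work just as well.

    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public final class Records {
        /** Write one record: 4-byte big-endian length, then the UTF-8 CSV payload. */
        public static void write(DataOutputStream out, String... fields) throws IOException {
            byte[] payload = String.join(",", fields).getBytes(StandardCharsets.UTF_8);
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }

        /** Read one record and return its fields; EOFException means the peer closed. */
        public static String[] read(DataInputStream in) throws IOException {
            int length = in.readInt();        // a real implementation would sanity-check this
            byte[] payload = new byte[length];
            in.readFully(payload);            // always a whole record, never a partial one
            return new String(payload, StandardCharsets.UTF_8).split(",", -1);
        }
    }
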
Lastly, the read/write loop on your client is probably a bad idea.
Unless your client is quite unusual this just adds complexity without
improving throughput, and it chews up CPU with its polling loop. To me
it smacks of inappropriate optimization: queues and scan loops should
only be introduced if monitoring code in the client positively
identifies the simple "write - wait for response - read" logic as a
bottleneck.
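
For comparison, the whole "write - wait for response - read" cycle is a single blocking call; no outbound queue, no scan/poll thread. (Stream setup as in the client sketch above, with an autoflushing PrintWriter assumed.)

    import java.io.*;

    final class BlockingRequest {
        static String request(PrintWriter out, BufferedReader in, String req) throws IOException {
            out.println(req);                 // write the request
            String reply = in.readLine();     // block here until the response arrives
            if (reply == null) {
                throw new EOFException("connection closed while waiting for a response");
            }
            return reply;
        }
    }
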
Why? A server should be written to service multiple clients that can
connect and disconnect while the server continues to run. If you want to
stop it, use a dead-simple client that connects, sends STOP, waits for
OK and then disconnects. Besides, such a client is often useful for
seeing what the server is doing, getting statistics, etc.
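
Such a control client can be tiny. The STOP/OK wording is from above; the port and line framing are the same assumptions as in the earlier sketches.

    import java.io.*;
    import java.net.*;

    public class StopClient {
        public static void main(String[] args) throws IOException {
            try (Socket conn = new Socket("localhost", 4000);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(conn.getInputStream(), "UTF-8"));
                 PrintWriter out = new PrintWriter(
                     new OutputStreamWriter(conn.getOutputStream(), "UTF-8"), true)) {
                out.println("STOP");
                String reply = in.readLine();
                System.out.println(reply == null ? "no reply (server error?)" : reply);
            }
        }
    }
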
Disagree! How many client/server designs had your adviser implemented
successfully?
Sockets work fine for both C and Java if your message exchange protocol
is a clean design.