Hi,
I have read some books about network programming, but I still have a few questions:

1. Let's assume we have an HTTP server written using nonblocking I/O and select(). How is it possible to serve many requests simultaneously?

I mean, suppose we have a static binary file that is 700 MB long (and many other, much smaller files, for example HTML pages).

Let's assume the server found a new socket descriptor with select() and accepted it. After receiving data from the client and parsing the request header, we find that it wants this big file, and we start sending it chunks of data.

I think this situation would block sending data to the other clients until the whole 700 MB file is sent, wouldn't it? If not, how is it possible to serve many connections simultaneously? By "binding" the descriptor of the open file to the accepted socket descriptor with a struct, queuing chunks of data in a FIFO, and then iterating to the next socket descriptor returned by select()? Something like the sketch below is what I have in mind.
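
Here is a rough sketch of the idea, assuming a single process and a single select() loop; the port, the file name "bigfile.bin" and the 16 KB chunk size are placeholders, and request parsing, HTTP headers and most error handling are left out:

/* Rough single-process sketch: one select() loop, a per-connection
 * struct that "binds" the open file descriptor to the accepted socket,
 * and at most one chunk sent per client per pass.  Port 8080,
 * "bigfile.bin" and the 16 KB chunk size are placeholders; request
 * parsing, HTTP headers and most error handling are omitted. */
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>

#define MAX_CONNS FD_SETSIZE
#define CHUNK     16384                 /* bytes sent per client per pass */

struct conn {
    int sock;                           /* accepted socket (nonblocking)  */
    int file;                           /* file being served, -1 if none  */
};

static struct conn conns[MAX_CONNS];

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(8080);
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, 16);

    for (int i = 0; i < MAX_CONNS; i++)
        conns[i].sock = -1;

    for (;;) {
        fd_set rfds, wfds;
        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(lfd, &rfds);
        int maxfd = lfd;

        for (int i = 0; i < MAX_CONNS; i++) {
            if (conns[i].sock < 0)
                continue;
            /* waiting for request -> readable; sending a file -> writable */
            FD_SET(conns[i].sock, conns[i].file < 0 ? &rfds : &wfds);
            if (conns[i].sock > maxfd)
                maxfd = conns[i].sock;
        }

        if (select(maxfd + 1, &rfds, &wfds, NULL, NULL) < 0)
            continue;

        if (FD_ISSET(lfd, &rfds)) {     /* new client */
            int s = accept(lfd, NULL, NULL);
            if (s >= 0 && s < MAX_CONNS) {
                fcntl(s, F_SETFL, O_NONBLOCK);
                conns[s].sock = s;
                conns[s].file = -1;
            } else if (s >= 0) {
                close(s);               /* fd too large for our table */
            }
        }

        for (int i = 0; i < MAX_CONNS; i++) {
            struct conn *c = &conns[i];
            if (c->sock < 0)
                continue;

            if (c->file < 0 && FD_ISSET(c->sock, &rfds)) {
                char req[4096];         /* read the request...            */
                ssize_t r = recv(c->sock, req, sizeof req, 0);
                if (r > 0)              /* ...and pretend we parsed it    */
                    c->file = open("bigfile.bin", O_RDONLY);
                if (c->file < 0) {      /* bad request or missing file    */
                    close(c->sock);
                    c->sock = -1;
                }
            } else if (c->file >= 0 && FD_ISSET(c->sock, &wfds)) {
                char buf[CHUNK];        /* exactly one chunk, then move on */
                ssize_t n = read(c->file, buf, sizeof buf);
                if (n > 0) {
                    ssize_t w = send(c->sock, buf, n, 0);
                    if (w < 0 && errno == EAGAIN)
                        w = 0;          /* socket buffer full: retry later */
                    if (w < 0)
                        n = -1;         /* real error: close below         */
                    else if (w < n)     /* short write: rewind unsent part */
                        lseek(c->file, w - n, SEEK_CUR);
                }
                if (n <= 0) {           /* EOF, error, or client gone      */
                    close(c->file);
                    close(c->sock);
                    c->sock = c->file = -1;
                }
            }
        }
    }
}

The point being that each pass through the loop sends at most one chunk per writable client, so the 700 MB transfer never monopolizes the server; select() keeps telling us which sockets can accept more data. Is this roughly how it is supposed to work?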