Stefan Schwarzer
Hi all!
For my FTP library module ftputil [1], some users have asked
for a way to avoid server timeouts (FTP status code 421), but I
haven't yet found a way to do this in all cases.
I'll try to explain the problem in more detail. The following is
rather specific and probably not easy to follow, but I'll do my
best. Please ask if you need more information.
ftputil has an FTPHost class (defined in [2]), which can be
instantiated like
    # very similar to ftplib.FTP
    host = ftputil.FTPHost(ftphost, user, password)
You can use an FTPHost instance to get file(-like) objects from it:
    read_from_this_file = host.file("remote_file", "r")
    write_to_this_file = host.file("another_remote_file", "wb")
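These objects are then used like ordinary Python file objects, so
typical usage is roughly (the written string is just a placeholder):

    data = read_from_this_file.read()
    read_from_this_file.close()

    write_to_this_file.write("some data")
    write_to_this_file.close()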
In the background, each call to the FTPHost.file method opens
another connection to the FTP server, using the login data from
the FTPHost instantiation. The return value of each call is a
_FTPFile object (defined in [3]) which wraps a file object
returned by socket.makefile.
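Internally this corresponds roughly to the usual ftplib idiom; the
following is only a sketch of the idea, not the actual ftputil code
(see [3] for that):

    import ftplib

    # log in a second time with the same data as the FTPHost instance
    session = ftplib.FTP(ftphost, user, password)
    # ask the server for a data connection for the transfer
    conn = session.transfercmd("RETR remote_file")
    # wrap the data socket in a file object; the _FTPFile object
    # delegates its read/write calls to such a file object
    fobj = conn.makefile("rb")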
My current FTPHost.keep_alive is roughly defined as
    # in FTPHost
    def keep_alive(self):
        # just prevent loss of the connection, so discard the result
        self.getcwd()
        # also refresh the connections of the associated file-like objects
        for host in self._children:
            # host._file is an _FTPFile object (see [3])
            host._file.keep_alive()
whereas in _FTPFile it's
    # in _FTPFile
    def keep_alive(self):
        if self._readmode:
            # read delegates to the file made from the data transfer
            # socket, made with socket.makefile (see [4])
            self.read(0)
        else:
            # write delegates to the file made from the data transfer
            # socket, made with socket.makefile (see [4])
            self.write("")
            self.flush()
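The idea is that library users call keep_alive now and then while
their program is otherwise idle, e.g. roughly like this (the sleep
interval and loop count are just examples):

    import time

    host = ftputil.FTPHost(ftphost, user, password)
    remote_file = host.file("another_remote_file", "wb")
    # nothing to write for a while, so ping the server from time
    # to time to keep the connections open
    for _ in range(10):
        time.sleep(60)
        host.keep_alive()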
In fact, the read call above on the data transfer channel keeps
the connection open, but the write call can't prevent a timeout
from the FTP server. (However, I notice this only when I call
_FTPFile.close(), so it seems that no data is sent until the
_FTPFile.close call.)
An approach which seems feasible at first is to call pwd() on the
FTP session (an ftplib.FTP instance) which the _FTPFile is built
on (similar to the FTPHost.getcwd() call above). Unfortunately,
this doesn't work: as soon as the file is opened, a STOR command
has already been sent to the FTP server, and it seems I can't send
another FTP command until the data transfer is finished by calling
_FTPFile.close (see _FTPFile._open in [3] for details of how the
connection is made).
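In plain ftplib terms, this failed approach looks roughly like the
following standalone sketch (again not the actual ftputil code; the
file name is a placeholder):

    import ftplib

    session = ftplib.FTP(ftphost, user, password)
    # the STOR command is sent as soon as the "file" is opened
    conn = session.transfercmd("STOR another_remote_file")
    fobj = conn.makefile("wb")
    # ... later, try to keep the control connection busy ...
    session.pwd()  # doesn't help: the server is still waiting for
                   # the STOR data transfer to be finished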
So, to re-phrase my question: How can I keep the connection - for
writing remote files - from being closed by the FTP server,
without requiring the user of the ftputil library to explicitly
send data with _FTPFile.write?
Stefan
[1] http://ftputil.sschwarzer.net/
[2] http://ftputil.sschwarzer.net/trac/browser/trunk/ftputil.py
[3] http://ftputil.sschwarzer.net/trac/browser/trunk/ftp_file.py
[4] http://docs.python.org/lib/socket-objects.html#l2h-2660