Is a file in use?

Stevie

Very simple, I'm sure, but how does one check whether a file is 'in use'
using Perl?
Thanks a million
Stevie
 
xhoster

Stevie said:
Very simple, I'm sure, but how does one check whether a file is 'in use'
using Perl?

Not at all simple. It depends very much on the OS and/or the file system
you use. Usually you lock the file, but that generally depends on all the
other people who put the file "in use" also using locks. You could also
use OS-dependent tools, like lsof on Linux, to see who has the file open.
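For what it's worth, a minimal sketch of that locking approach in Perl
(this only means anything if every process touching the file also uses
flock; the file name here is made up):

use strict;
use warnings;
use Fcntl qw(:flock);

my $file = '/tmp/shared.dat';   # hypothetical name, for illustration

open my $fh, '<', $file or die "open $file: $!";

# Non-blocking shared lock: fails immediately if another
# process (that also uses flock) holds an exclusive lock.
if (flock $fh, LOCK_SH | LOCK_NB) {
    print "not in use (no exclusive lock held)\n";
    flock $fh, LOCK_UN;
} else {
    print "in use (someone holds an exclusive lock)\n";
}
close $fh;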

Xho
 
Michael Vilain

Stevie said:
Very simple, I'm sure, but how does one check whether a file is 'in use'
using Perl?
Thanks a million
Stevie

That's not a portable OS feature. Unless you explicitly lock a file and
have a facility to check for locking, you're stuck with how each OS
checks this feature. Linux and MacOS X will gladly open a file R/O even
though another process has it open and is writing to it. Someone
suggested using lsof on Unix systems, which is _very_ non-portable and
not all systems install lsof "out of the box".

Do you have a specific instance you want to code for? What happens when
the file is busy? Does the open() or the read fail?
 
Stevie

OK, understood your points about portability, but that's not an issue
here. I'm running Linux and my main concern is to ensure the code
executes as fast as possible. The reason I'm checking for it being
locked is to make sure that it is not being written to by another
process.

Current code is:

system("lsof $file");
if ( $? == 0 ) {
print " success - not locked, exit status = $?\n";
} else {
print " failure - locked, exit status = $?\n";
}

This always returns failure with an exit status of 256. Any ideas why?

Would it be better/faster to try to open the file?
Any suggestions gratefully received.
Stevie
 
Michael Vilain

Stevie said:
OK, understood your points about portability, but that's not an issue
here. I'm running Linux and my main concern is to ensure the code
executes as fast as possible. The reason I'm checking for it being
locked is to make sure that it is not being written to by another
process.

Current code is:

system("lsof $file");
if ( $? == 0 ) {
    print " success - not locked, exit status = $?\n";
} else {
    print " failure - locked, exit status = $?\n";
}

This always returns failure with an exit status of 256. Any ideas why?

Would it be better/faster to try to open the file?
Any suggestions gratefully received.
Stevie

This isn't guaranteed to work, because in the time between the completion
of the system() and the next line, something could open the file. What
this boils down to is that there's no real way in the OS to guarantee a
file isn't being written to when you open it. UNIX will just let you do
it unless the program that's opening the file takes out a lock on it.
The OS won't do that for you.

Rethink your approach. It won't work.
 
Stevie

Thanks for your response. In my case the files in question aren't going
to become in use; they will become not in use. They are being written
to by another process which, once it's completed writing the files,
doesn't touch them again.

I worked out what I was doing wrong (and reading perldoc -f system
might have helped).

Placing the command in backticks allows you to capture the output. FWIW
test harness follows:

use strict;
use warnings;
use Data::Dumper;

my @ret = `lsof pathtomyfile`;
if (@ret) {
    print Dumper \@ret;
    print "File is in use, num of lines in output from lsof is ".scalar(@ret)."\n";
} else {
    print "Not in use\n";
}

Job done.
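One caveat: backticks hand that command line to the shell, so a path with
spaces or metacharacters would need quoting. A shell-free sketch of the
same check (lsof's -t flag prints only the PIDs of processes that have
the file open; the path here is made up):

use strict;
use warnings;

my $file = '/path/to/my/file';   # hypothetical path

# The list form of open bypasses the shell, so spaces and
# metacharacters in $file are harmless.
open my $lsof, '-|', 'lsof', '-t', $file
    or die "cannot run lsof: $!";
chomp(my @pids = <$lsof>);
close $lsof;   # lsof exits 1 when nothing is found; @pids is what matters

if (@pids) {
    print "File is in use by PID(s): @pids\n";
} else {
    print "Not in use\n";
}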
 
George

Stevie said:
OK, understood your points about portability, but that's not an issue
here. I'm running Linux and my main concern is to ensure the code
executes as fast as possible. The reason I'm checking for it being
locked is to make sure that it is not being written to by another
process.

Current code is:

system("lsof $file");
if ( $? == 0 ) {
    print " success - not locked, exit status = $?\n";
} else {
    print " failure - locked, exit status = $?\n";
}

This always returns failure with an exit status of 256. Any ideas why?

Would it be better/faster to try to open the file?
Any suggestions gratefully received.
Stevie

An old internet joke:

A: Because it messes up threading.
Q: Why would I not reply by top-posting?

This joke succinctly illustrates the problems with top-posting: you see
the answer before you see the question.
 
Martijn Lievaart

Stevie said:
Would it be better/faster to try to open the file?
Any suggestions gratefully received.

In all processes accessing the file, lock it first.
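(The writer side of that, sketched with a made-up file name; writers take
an exclusive lock, readers a shared one, and the scheme only works if
every process plays along:)

use strict;
use warnings;
use Fcntl qw(:flock);

open my $fh, '>>', '/tmp/shared.dat' or die "open: $!";

# Blocking exclusive lock: waits until no other flock user holds a lock.
flock $fh, LOCK_EX or die "flock: $!";
print {$fh} "new data\n";

flock $fh, LOCK_UN;   # close() would also release the lock
close $fh;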

HTH,
M4
 
Uri Guttman

S> Thanks for your response. In my case the files in question aren't going
S> to become in use, they will become not in use. They are being written
S> to by another process, which, once its completed writing the files,
S> doesn't touch them again.

that smells of an XY problem. you are stuck on this lsof type solution
when i suspect the problem is higher up and probably easier to solve.

is this a case of polling to see if a file is new or completely written?
there are better ways to deal with that than lsof.
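for example (just a sketch, with made-up paths): have the writer build
the file under a temporary name and rename() it into place when it is
done. rename is atomic within a filesystem, so a reader polling for the
final name never sees a half-written file.

use strict;
use warnings;
use File::Temp qw(tempfile);

# writer side: build the file under a temporary name...
my ($tmp_fh, $tmp_name) = tempfile(DIR => '/data/incoming');
print {$tmp_fh} "payload\n";
close $tmp_fh or die "close: $!";

# ...then atomically publish it. a reader polling /data/incoming
# for report.txt never sees a partial file.
rename $tmp_name, '/data/incoming/report.txt'
    or die "rename: $!";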

so why don't you describe the bigger problem instead of the troubles you
are having with your chosen solution?

uri
 
Peter J. Holzer

Stevie said:
> OK, understood your points about portability, but that's not an issue
> here. I'm running Linux and my main concern is to ensure the code
> executes as fast as possible.

lsof is slow.

> The reason I'm checking for it being locked is to make sure that it is
> not being written to by another process.

You can't do that. Locking and writing are completely orthogonal in
Unix. You can write to a file without locking it. And lsof doesn't test
for locks anyway.

> Current code is:
>
> system("lsof $file");
> if ( $? == 0 ) {
>     print " success - not locked, exit status = $?\n";
> } else {
>     print " failure - locked, exit status = $?\n";
> }
>
> This always returns failure with an exit status of 256. Any ideas why?

You reversed the test. lsof returns 0 if it finds at least one open
file, and 1 if it finds none. (Also, your messages are misleading: lsof
doesn't test whether a file is *locked*, just whether it is *open*.)
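A corrected sketch of that check ($? holds the raw wait status, so the
child's exit code is $? >> 8; that's why you saw 256, which is exit
code 1 shifted left by 8 bits):

use strict;
use warnings;

my $file = '/path/to/my/file';   # hypothetical path

system('lsof', $file);           # list form: no shell involved
my $exit = $? >> 8;
if ($exit == 0) {
    print "in use: at least one process has the file open\n";
} else {
    print "not in use: no process has the file open\n";
}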

hp
 
Peter J. Holzer

> Indeed. Also because it basically has to check all processes to see
> whether any of them holds a file descriptor open on the wanted file.
> It amounts to saying that the system keeps track of which files each
> process has opened, and that kind of info is fast to recover, but not
> vice versa: the reverse mapping can be recovered, but one has to
> search for it, and that's slow. Speaking of which, I wondered whether
> there exists a filesystem that *does* keep track of the reverse info
> and provides means to recover it fast by means of a suitable call.

I would expect the Linux (or any Unix) kernel to keep at least a process
(or file descriptor) count on each opened inode, because it needs to
know whether an inode is in use when it is unlinked. I don't know of
any Unix which makes the information readily available to user programs,
though.

Thinking about it, FAM and friends might be able to do that.
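(A sketch of that idea on Linux, using the CPAN module Linux::Inotify2
instead of FAM; the IN_CLOSE_WRITE event fires when a process closes a
file it had open for writing, which is exactly the "writer is finished"
signal discussed earlier in this thread. The directory name is made up.)

use strict;
use warnings;
use Linux::Inotify2;

my $inotify = Linux::Inotify2->new
    or die "inotify init failed: $!";

# Fire a callback whenever a writer closes a file in this directory.
$inotify->watch('/data/incoming', IN_CLOSE_WRITE, sub {
    my $event = shift;
    print $event->fullname, " is complete\n";   # safe to process now
});

# Block and dispatch events forever.
1 while $inotify->poll;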

hp
 
