Tim Neukum
I have three programs that run concurrently on a Unix machine (SunOS 5.9): call them daemon, user1, and user2.
daemon is long-running, as the name implies. It reads and writes a file that represents a queue, something like this:
open (QFILE ">+ $queue_file") or die "daemon ERROR: open: $! died";
flock (QFILE, LOCK_EX) or die "daemon ERROR: flock: $! died";
seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
# process file
truncate(QFILE, 0) or die "daemon ERROR: truncate: $! died";
seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
# rewrite file
Jobs get into the queue from the command line:
at prompt> addToQueue job
Adding things to the queue file is implemented with exclusive locking; this functionality works fine and has for some time.
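For context, addToQueue roughly does the following (a simplified sketch; the real record format is more than a bare job name):

use Fcntl qw(:flock);

sub addToQueue {
    my ($job) = @_;
    open (QFILE, ">> $queue_file")  or die "addToQueue ERROR: open: $! died";
    flock (QFILE, LOCK_EX)          or die "addToQueue ERROR: flock: $! died";
    seek (QFILE, 0, 2)              or die "addToQueue ERROR: seek: $! died";   # re-seek to end of file after the lock is granted
    print QFILE "$job\n";
    close (QFILE)                   or die "addToQueue ERROR: close: $! died";  # close releases the lock
}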
Here's my problem.
user1 is a process started in the background by addToQueue; it monitors the job while it is still in the queue.
user2 is another process, also started in the background by addToQueue, that monitors the job in the same way.
Both read the queue file, and I want a stable state of the file while reading, but I get "Bad file number" when I try to flock.
user1 does something like this:
while (inQueue()) {
    # do something here
}
user2 does something like this:
while (inQueue()) {
    # do something different here
}
inQueue is the same for both user1 and user2:
sub inQueue {
    open (INFILE, "< $queue_file")  or die "user# ERROR: open: $! died";
    flock (INFILE, LOCK_EX)         or die "user# ERROR: flock: $! died";   # <<----- this is the culprit
    seek (INFILE, 0, 0)             or die "user# ERROR: seek: $! died";
    # check for job, set $found_job
    close (INFILE)                  or die "user# ERROR: close: $! died";
    return $found_job;
}
QUESTIONS:
I've seen that sysopen may be preferred to open for writing. Why? e.g.
sysopen (QFILE, $queue_file, O_RDWR) or die ...
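i.e., if I understand sysopen right, the daemon's open would become something like:

use Fcntl qw(:DEFAULT :flock);

sysopen (QFILE, $queue_file, O_RDWR)  or die "daemon ERROR: sysopen: $! died";
flock (QFILE, LOCK_EX)                or die "daemon ERROR: flock: $! died";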
Why can't I request a LOCK_EX in sub inQueue?
If I change inQueue to LOCK_SH (sketched below)... if I have two processes using LOCK_SH, will they attempt to read at the same time?
Will LOCK_EX ever get a chance? i.e. will shared locks always succeed while another process holds a shared lock, even if a third process is waiting for an exclusive lock?
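What I'm considering for inQueue is something like this (the grep is just a stand-in for the real job check, and it assumes $job holds the job name):

use Fcntl qw(:flock);

sub inQueue {
    open (INFILE, "< $queue_file")  or die "user# ERROR: open: $! died";
    flock (INFILE, LOCK_SH)         or die "user# ERROR: flock: $! died";   # shared lock on a read-only handle
    seek (INFILE, 0, 0)             or die "user# ERROR: seek: $! died";
    my $found_job = grep { /\Q$job\E/ } <INFILE>;   # stand-in check: count lines mentioning the job
    close (INFILE)                  or die "user# ERROR: close: $! died";   # close releases the shared lock
    return $found_job;
}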
Scenario timeline:
time 0: user1 requests LOCK_SH
time 1: user1 gets LOCK_SH
time 2: daemon requests LOCK_EX
time 3: daemon waits for the OS to grant the lock
time 4: user2 requests LOCK_SH
time 5: ... does user2 get the lock, or does it get in line behind daemon?
If the latter, what the $$$$ is the difference between LOCK_SH and LOCK_EX?
If the former (i.e. user2 gets LOCK_SH without waiting), then how can you be certain of the position and interaction within the file stream, considering user1 may be reading at the same time? Very carefully, I guess? Does each process get a different pointer into the file position?
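To make that last question concrete, this is the kind of experiment I have in mind, with two read-only handles on the same file (here in a single process):

open (FH1, "< $queue_file")  or die "ERROR: open: $! died";
open (FH2, "< $queue_file")  or die "ERROR: open: $! died";
my $line = <FH1>;                                           # advances FH1's position only
printf "FH1 at %d, FH2 at %d\n", tell (FH1), tell (FH2);    # does FH2 stay at 0?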
I know this is a lot, but I haven't found anything that sufficiently answers these questions.
Thanks in advance,
Tim