So you can't accidentally use a variable on line 53403 that you
declared on line 2, when you meant to use the similarly-named variable
that you declared on line 53400.
Also so that you don't have to trace through 53401 lines of code
looking for every instance of the variable to see what might have
happened to it. Just look at the top of the innermost block and
you'll see where it was declared and everything that could have
happened to it before the relevant line.
Will it change the behavior of a "working" program? Nope. This tip is
intended to make it easier to find bugs when you introduce them.
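For illustration only - this sketch is mine, not from the original
post, and the variable name $count is made up - here is what a tight
scope buys you under strict:

use strict;
use warnings;

{
    # Declare the variable in the innermost block that uses it.
    my $count = 0;
    $count++;
    print "count is $count\n";
}

# Outside the block, any reference to $count is a compile-time error
# under strict, instead of silently reusing some far-away variable:
#     print $count;  # Global symbol "$count" requires explicit package name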
1) Global barewords are just that - global. If you open DH on line
30, and, while you're reading from it, call a function that's defined
on line 400, and that function also opens a directory with the DH
handle, you will have blown away any possibility of reading the
original directory.
2) Global barewords are not subject to strict. If you name your
handle "MY_DIR0", and at one point typo it as "MY_DIRO", strict will
not complain. Instead, you'll get a very confusing "read on closed
directory handle" warning, or similar, and waste valuable time trying
to figure out when you closed your directory handle.
3) Global bareword handles cannot be used in recursive subroutines,
because they are non-reentrant. The second, third, etc. time the
function is entered, it will attempt to open a directory with the
existing directory handle, again clobbering your original read. (A
sketch using lexical handles, which avoid all three problems, follows
this list.)
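To make the contrast concrete, here's a minimal sketch - mine, not
from the original post; the name list_dir and the starting path are
made up - that uses a lexical directory handle instead of a bareword:

use strict;
use warnings;

# A lexical handle is an ordinary scalar: strict catches typos in its
# name, every call gets its own handle (so recursion is safe), and no
# other part of the program can clobber it.
sub list_dir {
    my ($path) = @_;
    opendir(my $dh, $path) or die "Cannot opendir $path: $!";
    while (defined(my $entry = readdir $dh)) {
        next if $entry eq '.' or $entry eq '..';
        my $full = "$path/$entry";
        print "$full\n";
        list_dir($full) if -d $full;   # safe: $dh belongs to this call
    }
    closedir $dh;
}

list_dir('.');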
What difference would that have to the operation of the program?
None - until your maintenance programmer has to read your code and
figure out what the hell you're doing.
Surely this is personal preference?
Yes, and I have yet to meet any person anywhere who prefers confusing
and unreadable code to clear and readable code. You're basically just
lying
to anyone who reads your code, including yourself. You have a variable
named @files, but you don't put files into it? Why? What is gained by
this misdirection?
What difference would that have to the operation of the program?
Surely this too is personal preference?
Yup. See above.
See also:
perldoc perlstyle
This is another problem that wasn't pointed out to you. You're
ignoring one line from the file. $#files contains the last index of
the array @files. When @files contains only one element, $#files will
be equal to 0, so the loop exits with that last line still unprinted.
That should say
while ($#files > -1)
g> {
g> $rand = int(rand(@files));
g> print SAVETO $files[$rand];
g> delete $files[$rand];
Here is your big bug. That doesn't do what you think it does. Have
you run this code? My Turing machine says it will never stop.
mine says it should "most likely" stop, since *eventually* the last
element will probably be chosen and deleted, and then *eventually* the
new last element will probably be chosen and deleted, etc.
Ran fine on my machine, would you care to explain further?
According to the docs, delete will only undefine an element of an
array, not remove it - EXCEPT when dealing with the end of the array.
If you delete the last element, the array will indeed shrink. So yes,
this will (most likely) eventually go through all the files. However,
after "deleting" $files[3], for example, your program can then again
choose index 3 and attempt to print the now-undefined element. Very
inefficient.
Actually, seeing this problem brought to light two realizations I
hadn't had before:
1) The array will shrink down to the last element for which exists()
still returns true
2) A delete'd element returns a false value for exists() (which is
documented in `perldoc -f delete` but seems very contradictory to
`perldoc -f exists`, IMO). A short demo of both points follows.
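A quick demonstration of both points (my sketch, not from the
thread):

use strict;
use warnings;

my @files = ('a', 'b', 'c', 'd');

delete $files[1];              # middle element: undefined, NOT removed
print scalar(@files), "\n";    # still 4
print "gone\n"  unless exists  $files[1];  # delete'd element fails exists()
print "undef\n" unless defined $files[1];  # and its value is undef

delete $files[3];              # last element: now the array does shrink
print scalar(@files), "\n";    # prints 3 - down to the last index that
                               # still passes exists() (index 2, 'c')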
End result - yes, this program will (most likely) "work", but it is
very inefficient and likely to produce "uninitialized value" warnings,
since rand() can keep choosing indices whose elements have already
been deleted.
Not quite true. It does remove an element if it's the *last* element.
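For what it's worth, here is one way - not from the original post;
List::Util's shuffle would be another - to print the lines in random
order without ever re-picking a deleted slot. The data and the output
filename are placeholders:

use strict;
use warnings;

my @files = ("one\n", "two\n", "three\n", "four\n");  # stand-in data

open(my $saveto, '>', 'shuffled.txt') or die "Cannot open: $!";
while (@files) {                   # same test as ($#files > -1)
    my $rand = int(rand(@files));  # always a valid index into what's left
    print $saveto $files[$rand];
    splice(@files, $rand, 1);      # really remove the chosen element
}
close $saveto;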
Hope this helps clarify why you were advised to follow some of these
'best practices'.
Paul Lalli