Unexpected timing results with file I/O


Steven D'Aprano

After reading an earlier thread about opening and closing lots of files,
I thought I'd do a little experiment.

Suppose you have a whole lot of files, and you need to open each one,
append a string, then close it again. There are two obvious ways to do
it: group your code by file, or group your code by procedure.

# Method one: grouped by file.
for each file:
    open the file, append the string, then close it


# Method two: grouped by procedure.
for each file:
    open the file
for each open file:
    append the string
for each open file:
    close the file


If you have N files, both methods make the same number of I/O calls: N
opens, N writes, N closes. Which is faster?
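
Spelled out as plain Python, the two methods are essentially this (the
same operations the timeit snippets below measure; 'names' is the list
of file names):

# Method one: open, write and close one file at a time.
def method_one(names):
    for name in names:
        fp = open(name, 'a')
        fp.write('xyz\n')
        fp.close()

# Method two: open them all, write to them all, then close them all.
def method_two(names):
    files = [open(name, 'a') for name in names]
    for fp in files:
        fp.write('xyz\n')
    for fp in files:
        fp.close()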

Intuitively, the first method has *got* to be faster, right? It's got one
loop instead of three and it doesn't build an intermediate list of open
file objects. It's so *obviously* going to be faster that it is hardly
worth bothering to check it with timeit, right?

Well, I wouldn't be writing this unless that intuitive result was wrong.
So here are my test results:


Method 1:

>>> import timeit
>>> names = ['afile' + str(n) for n in range(1000)]
>>> T = timeit.Timer('''for name in names:
...     fp = open(name, 'a'); fp.write('xyz\\n'); fp.close()
... ''', 'from __main__ import names')
>>> min(T.repeat(6, 500))
17.391216039657593


Method 2:

>>> for name in names:  # reset the files to an empty state.
...     fp = open(name, 'w'); fp.close()
...
>>> T = timeit.Timer('''files = [open(name, 'a') for name in names]
... for fp in files:
...     fp.write('xyz\\n')
... for fp in files:
...     fp.close()
... ''', 'from __main__ import names')
>>> min(T.repeat(6, 500))
16.823362112045288


Surprisingly, Method 2 is a smidgen faster, by about half a second over
500,000 open-write-close cycles. It's not much faster, but the result is
consistent across many tests, even when I change the parameters (the
number of files, the number of runs per timeit test, and so on).

I'm using Linux and Python 2.5.

So, what's going on? Can anyone explain why the code which does more work
takes less time?
 

Christian Heimes

Steven said:
So, what's going on? Can anyone explain why the code which does more work
takes less time?

Short answer: CPU and RAM are much faster than hard disks.

The three loops and the creation of a list cost only a few CPU cycles
compared to flushing the new data to disk.
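
If you want to put a rough number on that, you can time just the
bookkeeping with no I/O at all; something like this untested sketch,
where the list build and the two extra loops stand in for Method 2's
overhead:

import timeit

# Time only the pure-Python overhead: build a list and loop over it
# twice, with no file I/O whatsoever.
t = timeit.Timer(
    'files = [n for n in names]\n'
    'for f in files: pass\n'
    'for f in files: pass',
    'names = range(1000)')
print min(t.repeat(6, 500))  # a tiny fraction of the I/O-bound times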

Christian
 

Marc 'BlackJack' Rintsch

Steven said:
Surprisingly, Method 2 is a smidgen faster, by about half a second over
500,000 open-write-close cycles. [...] So, what's going on? Can anyone
explain why the code which does more work takes less time?

Can't confirm this (Linux, Python 2.5):

Method 1: 15.380897998809814
Method 2: 18.085366010665894

I guess it's really all about the disk I/O, as my system monitor applet
shows that almost all of the time is spent in the kernel and very little
in user space.
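
You can check the same thing from inside Python, by the way. A quick
sketch with os.times(), which reports user and system time for the
current process:

import os

start = os.times()
# ... run one of the timing loops here ...
end = os.times()
# index 0 is user time, index 1 is system (kernel) time
print 'user: %.2fs  system: %.2fs' % (end[0] - start[0], end[1] - start[1])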

Ciao,
Marc 'BlackJack' Rintsch
 

rdahlstrom

Steven said:
Surprisingly, Method 2 is a smidgen faster, by about half a second over
500,000 open-write-close cycles. [...] Can anyone explain why the code
which does more work takes less time?

The code that does more work takes more time. The second one does
quite a bit less work. Think of it like this:

You have 500,000 people to fit through a door. Here are your options:

1. For each person, open the door, walk through the door, then close
the door.
2. Open the door, allow everyone to walk through, then close the
door.

Which one would you say would be a more efficient way to fit 500,000
people through the door?
 

Gabriel Genellina

rdahlstrom said:
You have 500,000 people to fit through a door. Here are your options:

1. For each person, open the door, walk through the door, then close
the door.
2. Open the door, allow everyone to walk through, then close the door.

Mmmm, no, the second one should be:

2. Create 500,000 doors and open them.
Make each person enter the room, one at a time, using his own door.
Close each of the 500,000 doors.
 

Carl Banks

rdahlstrom said:
You have 500,000 people to fit through a door. [...] Which one would
you say would be a more efficient way to fit 500,000 people through
the door?

Bad analogy. A better analogy would be if each person has their own
door to walk through.

My hunch is that it has to do with the OS's I/O scheduling. Closing a
file triggers a cache flush, which in turn prompts the I/O scheduler to
queue a write to disk; the scheduler is perhaps more efficient (for a
given number of total writes) when it can combine many writes at the
same time.
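
One way to test that hunch (an untested sketch, reusing the names list
from Steven's benchmark): force every write all the way to disk with
fsync, so the kernel can't combine them, and see whether the gap
between the two methods disappears.

import os

# Variation on Method 1 that defeats write combining: flush Python's
# buffer and ask the kernel to push the data to disk before closing.
for name in names:
    fp = open(name, 'a')
    fp.write('xyz\n')
    fp.flush()              # empty the userspace buffer
    os.fsync(fp.fileno())   # force the kernel to write the data out now
    fp.close()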


Carl Banks
 

rdahlstrom

Carl Banks said:
Bad analogy. A better analogy would be if each person has their own
door to walk through.

The analogy holds. It's faster to open the door, do what you need to
do, then close the door than it is to open and close the door each
time.
 

Marc 'BlackJack' Rintsch

rdahlstrom said:
The analogy holds. It's faster to open the door, do what you need to
do, then close the door than it is to open and close the door each
time.

It doesn't hold. Read the code again. The total count of "open door" and
"close door" is the same in both cases.

It's

for every person:
    open his door; push him through the door; close his door

vs.

for every person:
    open his door
for every person:
    push him through the door
for every person:
    close his door

Ciao,
Marc 'BlackJack' Rintsch
 

rdahlstrom

It doesn't matter how many door openings and closings there are; what
matters is the order in which the opening, walking through, and closing
are done. That's my point. In the second example, all of the disk
operations are done at the same time. That's what I meant by people
going through the doors. Maybe it was clearer in my head.
 

Marc 'BlackJack' Rintsch

rdahlstrom said:
In the second example, all of the disk operations are done at the same
time.

But my timing shows that method two is slower on my computer. So there is
no obvious winner.

Ciao,
Marc 'BlackJack' Rintsch
 

Steven D'Aprano

Marc 'BlackJack' Rintsch said:
Can't confirm this (Linux, Python 2.5):

Method 1: 15.380897998809814
Method 2: 18.085366010665894

Hmmm... does your system use software RAID? Mine does. I wonder if that's
a relevant factor?

Marc also said:
I guess it's really all about the disk I/O, as my system monitor applet
shows that almost all of the time is spent in the kernel and very little
in user space.

I wouldn't be surprised if it was something to do with the OS caching
writes to disk. And saying that is really just me doing a lot of
hand-waving and saying "it's magic of what we know naught".
 

Marc 'BlackJack' Rintsch

Steven said:
Hmmm... does your system use software RAID? Mine does. I wonder if
that's a relevant factor?

No, the test ran on a plain reiserfs partition.

Ciao,
Marc 'BlackJack' Rintsch
 
