How to force fscanf to find only data on a single input line?


Malcolm McLean

Richard Tobin said:
Try timing the following program with arguments suitable to the amount
of real memory you have. It doesn't show any sign of super-linearity
on my system.

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int m = atoi(argv[1]);      /* target size in bytes */
    int i;
    void *buf = 0;

    fprintf(stderr, "please wait\n");

    /* grow the buffer one byte at a time up to m bytes */
    for(i = 1; i <= m; i++)
        if(!(buf = realloc(buf, i)))
        {
            fprintf(stderr, "realloc(%d) failed\n", i);
            return 1;
        }

    return 0;
}
But what happens once you fragment the heap a little by performing other
allocations? You can count the copies, incidentally, simply by testing
whether the pointer realloc() returns equals the pointer you passed in.
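
For instance, Richard's loop can be instrumented like this (a sketch of
that idea, not part of the original program; note that strictly speaking
the old pointer value is indeterminate after a realloc() that moves the
block, so the comparison is a practical diagnostic rather than portable C):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int m = atoi(argv[1]);
    int i;
    void *buf = 0;
    long moves = 0;

    for(i = 1; i <= m; i++)
    {
        void *old = buf;            /* where the block was before */
        buf = realloc(buf, i);
        if(!buf)
        {
            fprintf(stderr, "realloc(%d) failed\n", i);
            return 1;
        }
        if(buf != old)              /* block was copied elsewhere; the
                                       initial allocation from NULL also
                                       registers as one move here */
            moves++;
    }

    printf("%d calls to realloc(), %ld moves\n", m, moves);
    free(buf);
    return 0;
}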
 

Richard Tobin

Malcolm McLean said:
But what happens once you fragment the heap a little, by performing other
allocations?

What do you think happens?

Possibly each reallocation takes a little longer, depending on the
allocator. I suppose it's possible that there's some first-fit
allocator out there where it changes the number of reallocations, but
I very much doubt it. Can you think of a plausible implementation
that makes it not O(N)?

-- Richard
 

CBFalconer

Malcolm said:
.... snip ...

As it stands ggets() calls realloc repeatedly with increments of
128 bytes. realloc() generally performs internal copying - I read
somewhere that it is rare for actual implementations to extend the
block, though I suppose with an increment as small as 128 bytes
that might happen more often than not.

And that is so arranged because the _normal_ usage is to acquire
interactive lines. These are not expected to be especially large,
and will probably normally not exceed the initial 112 byte
assignment, and almost never exceed the 1st reallocation of 240
bytes. However the system CAN cope with very large requirements,
although not especially quickly. Again, such speed is totally
useless in interactive work. This was all selected quite
deliberately; it didn't just happen.
 

Malcolm McLean

Richard Tobin said:
What do you think happens?

Possibly each reallocation takes a little longer, depending on the
allocator. I suppose it's possible that there's some first-fit
allocator out there where it changes the number of reallocations, but
I very much doubt it. Can you think of a plausible implementation
that makes it not O(N)?
A best-fit allocator. Imagine we've got blocks of one, two, three, four,
five, ... bytes. When we ask for one byte it slots us into the one, to
conserve the bigger blocks. Then into the two, recopying, then the three,
and so on.
If we've just got one block of 100 then it gives us the start of that, and
extends.

Mine makes 134 reallocations for N = 1 million, and 2335 for N = 10
million. I've got about 1 GB installed. So mine's showing a superlinear
slowdown under moderate load, at least. A run up to 1 GB hasn't finished
yet. At about 20 million bytes and 4000 reallocations it is visibly
slowing. I suspect your system has a nice uniform block for realloc() to
play in, whilst mine is slotting memory in alongside other programs.
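
To make the thought experiment concrete, here is a toy simulation of
that best-fit scenario (purely illustrative; the heap model of free
blocks of sizes 1..K and the byte-at-a-time growth are assumptions of
the thought experiment, not of any real allocator):

#include <stdio.h>

#define K 100   /* the model heap starts as free blocks of sizes 1..K */

int main(void)
{
    int free_block[K];  /* free_block[i] != 0 => a block of size i+1 is free */
    int i, size, moves = 0;
    int current = -1;   /* index of the block our buffer occupies */

    for(i = 0; i < K; i++)
        free_block[i] = 1;

    /* grow a single buffer one byte at a time */
    for(size = 1; size <= K; size++)
    {
        if(current >= 0 && size <= current + 1)
            continue;                   /* still fits where it is */
        for(i = size - 1; i < K; i++)   /* best fit: smallest free block
                                           that is large enough */
            if(free_block[i])
                break;
        if(i == K)
            break;                      /* the model heap is exhausted */
        if(current >= 0)
        {
            free_block[current] = 1;    /* old block goes back on the list */
            moves++;                    /* and the contents get copied */
        }
        free_block[i] = 0;
        current = i;
    }

    printf("%d growth steps, %d moves\n", size - 1, moves);
    return 0;
}

It reports 100 growth steps, 99 moves: under best fit with that block
mixture nearly every growth forces a copy, whereas a first-fit allocator
sitting on one large region could usually extend in place.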
 

Malcolm McLean

Malcolm McLean said:
A best-fit allocator. Imagine we've got blocks of one, two, three, four,
five, ... bytes. When we ask for one byte it slots us into the one, to
conserve the bigger blocks. Then into the two, recopying, then the three,
and so on.
If we've just got one block of 100 then it gives us the start of that, and
extends.

Mine makes 134 reallocations for N = 1 million, and 2335 for N = 10
million. I've got about 1 GB installed. So mine's showing a superlinear
slowdown under moderate load, at least. A run up to 1 GB hasn't finished
yet. At about 20 million bytes and 4000 reallocations it is visibly
slowing. I suspect your system has a nice uniform block for realloc() to
play in, whilst mine is slotting memory in alongside other programs.
Duh. By "reallocations" I mean "moves", of course. The program makes N
calls to realloc().
 

Richard Tobin

Malcolm McLean said:
A best-fit allocator. Imagine we've got blocks of one, two, three, four,
five, ... bytes. When we ask for one byte it slots us into the one, to
conserve the bigger blocks. Then into the two, recopying, then the three,
and so on.
If we've just got one block of 100 then it gives us the start of that, and
extends.

This seems like a very poor way to implement realloc(). If you call
realloc(), you are probably growing the buffer, so best fit is
inappropriate. Expanding by a factor and then doing best fit would be
more reasonable.
Mine makes 134 reallocations for N = 1 million, and 2335 for N = 10 million.

How do you come to have 2335-134=2201 differently-sized blocks of
between 1 and 10 million bytes lying around? Are you splitting bits of
them for other allocations? Again, this seems like a very poor allocator
if it does that many reallocations for a growing buffer.

What system is this?
I've got about 1 GB installed. So mine's showing a superlinear slowdown
under moderate load, at least. A run up to 1 GB hasn't finished yet. At
about 20 million bytes and 4000 reallocations it is visibly slowing. I
suspect your system has a nice uniform block for realloc() to play in,
whilst mine is slotting memory in alongside other programs.

Other programs? Are you not using a system with virtual memory?

-- Richard
 

Malcolm McLean

Richard Tobin said:
How do you come to have 2335-134=2201 differently-sized blocks of
between 1 and 10 million bytes lying around? Are you splitting bits of
them for other allocations? Again, this seems like a very poor allocator
if it does that many reallocations for a growing buffer.

What system is this?
This is the Windows Vista freebie C Express compiler.
I am not such an inferior being, really. I have a nice Beowulf cluster at
university. However currently I have a week off.
Other programs? Are you not using a system with virtual memory?
I would hope that pointers are not raw chip addresses. However, exactly how
Vista chops up memory between processes I couldn't tell you. Obviously there
must be something else in the system; if you have only one allocated block
on the heap there wouldn't seem to be much call to move it. Unless it
puts blocks at the top, which I wouldn't put past it. It's a horrid
compiler. Nothing works.
 

Malcolm McLean

CBFalconer said:
And that is so arranged because the _normal_ usage is to acquire
interactive lines. These are not expected to be especially large,
and will probably normally not exceed the initial 112 byte
assignment, and almost never exceed the 1st reallocation of 240
bytes. However the system CAN cope with very large requirements,
although not especially quickly. Again, such speed is totally
useless in interactive work. This was all selected quite
deliberately; it didn't just happen.
Sure. But this is production code, not teaching code.
Someone could tie up system resources for a long period, on my system if not
on Richard Tobin's far superior one, by piping a maliciously long line to
stdin. If you can fix that without any other implications, such as limiting
line length or complicating the interface, you should do it. The penalty is
only a couple of extra lines of code, to increment delta by 10% on each
pass, maybe after the first two or three expansions, if you want to maintain
the 128-size block for speed of normal lines.
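
A sketch of that policy (this is not CBFalconer's ggets(); the function
name readline_geom is hypothetical, and the 128-byte start and 10% growth
factor are taken from the discussion above, everything else is assumed):

#include <stdio.h>
#include <stdlib.h>

/* Read one line of unbounded length from fp.  Starts with a small
   fixed-size buffer for the common interactive case, then grows the
   increment by 10% per expansion so a maliciously long line costs
   O(N) copying overall rather than O(N^2). */
char *readline_geom(FILE *fp)
{
    size_t len = 0, cap = 128, delta = 128;
    char *buf = malloc(cap), *tmp;
    int ch;

    if(!buf)
        return NULL;
    while((ch = getc(fp)) != EOF && ch != '\n')
    {
        if(len + 1 >= cap)
        {
            delta += delta / 10;    /* grow the step by 10% each pass */
            cap += delta;
            if(!(tmp = realloc(buf, cap)))
            {
                free(buf);
                return NULL;
            }
            buf = tmp;
        }
        buf[len++] = (char)ch;
    }
    if(ch == EOF && len == 0)
    {
        free(buf);
        return NULL;                /* nothing read at all */
    }
    buf[len] = '\0';
    return buf;                     /* caller frees */
}

Because delta grows geometrically, the number of moves for an N-byte line
is O(log N) and the total copying is O(N), while ordinary interactive
lines still fit in the initial 128 bytes.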
 

CBFalconer

Richard said:
What do you think happens?

Possibly each reallocation takes a little longer, depending on the
allocator. I suppose it's possible that there's some first-fit
allocator out there where it changes the number of reallocations, but
I very much doubt it. Can you think of a plausible implementation
that makes it not O(N)?

Take a look at nmalloc, for DJGPP. There are several prime points
about it, which is why I wrote it in the first place.

1. Avoiding long searches to combine blocks on free.
2. Avoiding wasting time copying memory on realloc.
3. Rapid resolution of malloc calls.
4. A complete debugging package for the user (added).

All of these have been arranged to have O(1) performance in the number
of allocated blocks. That O(1) performance for free() prevents long
hiatuses when releasing large numbers of allocated blocks. The free
action reduces (but does not eliminate) memory blockage due to
fragmentation. The realloc action minimizes the CPU time spent copying.
The code is not (can't be) fully portable, but it is close. I have
not found any bugs.

<http://cbfalconer.home.att.net/download/nmalloc.zip>
 

Richard Tobin

CBFalconer said:
And that is so arranged because the _normal_ usage is to acquire
interactive lines. These are not expected to be especially large,
and will probably normally not exceed the initial 112 byte
assignment, and almost never exceed the 1st reallocation of 240
bytes.

I would have thought it was just as useful for non-interactive reading
of files. I often encounter text files with absurdly long lines
(megabytes) - but these are XML files, and are normally read with an
XML parser, so I suppose it's not a compelling example.

-- Richard
 

Ian Collins

Malcolm said:
Sure. But this is production code, not teaching code.

Doesn't matter; it is a good choice for most situations, and libraries
are full of compromises.
Someone could tie up system resources for a long period, on my system if
not on Richard Tobin's far superior one, by piping a maliciously long
line to stdin.

Most systems would be I/O bound (where do you keep the long line?), not
CPU bound in this case.
 

Kelsey Bjarnason

[snips]

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.

Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.
 

Richard

Kelsey Bjarnason said:
[snips]

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.

Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.

It is if you start with more than 1 cent. And it is if, for you, a penny
is a lot of money.
 

Joe Wright

Kelsey said:
[snips]

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.

Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.

But imagine you were able to invest $100. The wonder of compound
interest at 10% would yield $1,378,061.23 or so.
 

Kelsey Bjarnason

Kelsey Bjarnason said:
[snips]

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.

Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.

It is if you start with more than 1 cent.

Which you don't, as the statement was about, I quote, "a penny". Not a C
note. Not a million. A penny.
 

Richard

Kelsey Bjarnason said:
Kelsey Bjarnason said:
[snips]

On Fri, 31 Aug 2007 14:48:31 +0100, Malcolm McLean wrote:

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.

Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.

It is if you start with more than 1 cent.

Which you don't, as the statement was about, I quote, "a penny". Not a C
note. Not a million. A penny.

Are you as pedantic and petty in real life?

Why did you snip my reply?

The bit which suggests that 140 quid is a lot to a little boy who thinks
investing one penny is a big thing?
 

Joe Wright

Richard said:
Kelsey Bjarnason said:
[snips]

On Fri, 31 Aug 2007 14:48:31 +0100, Malcolm McLean wrote:

grows exponentially. This is the standard schoolboy question: if you
invested a penny at 10% interest, applied annually, what would it be worth
in 100 years' time? The answer is rather a large sum.
Indeed? Hmm. Should be simple enough to code...

x = 0.01

loop 100 times, x *= 1.10 at each iteration...

result is something less than $140. Hardly what I'd call "rather a large
sum" for a century of investing.
It is if you start with more than 1 cent.
Which you don't, as the statement was about, I quote, "a penny". Not a C
note. Not a million. A penny.

Are you as pedantic and petty in real life?

Why did you snip my reply?

The bit which suggests that 140 quid is a lot to a little boy who thinks
investing one penny is a big thing?

I see some confusion between the cent, the 100th part of a US dollar,
and the penny, the 240th part of an old UK pound. Remember? Twelve pence
to a shilling, twenty shillings to a pound? The 100th part of the new
pound suffers the name of pee.

I once bought a pint of bitter in a Croydon pub for one and six.
 

Richard Heathfield

Joe Wright said:
Richard wrote:


I see some confusion between the cent, the 100th part of a US dollar,
and the penny, the 240th part of an old UK pound. Remember? Twelve pence
to a shilling, twenty shillings to a pound? The 100th part of the new
pound suffers the name of pee.

If you assume proper pennies, the answer reduces to just under £57.42.
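
A quick check of that figure (assuming, per Joe, 240 old pence to the
pound):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* one old penny at 10% for 100 years, in pounds of 240 pence */
    printf("%.4f\n", pow(1.10, 100.0) / 240.0);   /* about 57.4192 */
    return 0;
}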

Any child capable of performing a 100-year compound interest calculation
is also very likely to be capable of realising that it is a
hypothetical exercise, for at least one of several reasons:

(a) very, very few people old enough to open a bank account (or to have
one opened for them) will still be alive 100 years thereafter;
(b) because of the exigencies and vicissitudes of normal life, the money
is very unlikely to remain in the account, unscathed, for an entire
century;
(c) finding a low-risk high-yield account is difficult - getting a 10%
interest rate is very, very difficult indeed - you'd be hard-pressed to
get that rate with a principal of thousands of pounds, let alone with a
single penny.
 

Richard

Richard Heathfield said:
Joe Wright said:


If you assume proper pennies, the answer reduces to just under £57.42.

Any child capable of performing a 100-year compound interest calculation
is also very likely to be capable of realising that it is a
hypothetical exercise, for at least one of several reasons:

(a) very, very few people old enough to open a bank account (or to have
one opened for them) will still be alive 100 years thereafter;
(b) because of the exigencies and vicissitudes of normal life, the money
is very unlikely to remain in the account, unscathed, for an entire
century;
(c) finding a low-risk high-yield account is difficult - getting a 10%
interest rate is very, very difficult indeed - you'd be hard-pressed to
get that rate with a principal of thousands of pounds, let alone with a
single penny.

Off Topic. Please take it to a newsgroup which discusses bank
accounts. We do not discuss this stuff here.

blah blah blah blah
 

Kelsey Bjarnason

[snips]

Are you as pedantic and petty in real life?

I try to be, yes, when examining claims of this sort.
Why did you snip my reply?

Because it added nothing relevant.
The bit which suggests that 140 quid is a lot to a little boy who thinks
investing one penny is a big thing?

A 100+ year old little boy? Indeed. Or do you mean Great Grandpa should
invest a penny so the kid three generations down the line can inherit a
hair over a C note? Not a very smart old coot, was he?
 
