ofstream random access not working / developer error


Lars Uffmann

Hi everyone!

It's probably my fault, thus the developer error in the subject, but I'm
having some troubles with ofstream. I am trying to randomly access an
output file - I want to write an index for a larger datafile in here,
with the index file holding - in the correct order - file pointers &
length information for blocks in a big data file. Later on I'll rewrite
the big file from the information in the index file and the data in the
data file.
The rewriting works fine, but when I use the following code to enlarge
the index file (padding with 0), it will either ignore the data written
to the index file in a previous function call (when opened without
ios_base::app), or ignore my attempts to write somewhere in the file
using seekp() - this happens when I open the file with ios_base::app.

(code below)

If you can point me to my error (I have tried basically all combinations
of open modes that seemed to make sense) - or tell me that ofstream is
simply not suited for this (it should be, I think) - thanks a lot!

Best Regards,

Lars

I'll try to break the code down - let's assume buf is defined big enough:
/**/
struct myStruct {
    streampos pos;
    unsigned int length;
};

#define BUFSIZE 10000

void writeToIndexFile (const char *filename, int datasetNo, myStruct *s)
{
    ofstream indexFile;
    int filesize, missingBytes;
    char buf[BUFSIZE];

    indexFile.open (filename, ios_base::binary);
    indexFile.seekp (0, ios_base::end); // move to eof position

    filesize = indexFile.tellp(); // get putpointer @ eof position

    // determine how big the file SHOULD be in order to be able to
    // write s to the fixed position (datasetNo-1)*sizeof(myStruct)
    missingBytes = (datasetNo - 1)*sizeof (myStruct) - filesize;

    if (missingBytes > 0) { // if file not big enough
        memset (buf, 0, sizeof(buf)); // make sure only 0 gets appended
        indexFile.write (buf, missingBytes); // append just enough bytes
    }

    indexFile.seekp ((datasetNo-1)*sizeof(myStruct), ios_base::beg);
    indexFile.write ((char *) s, sizeof (myStruct));

    indexFile.close();
}
/**/
 

Lars Uffmann

for the record:

the very same function code, using cstdio and FILE functions (fopen,
fseek, ftell, fwrite, fclose) works just as intended...

So is it a limitation of fstream that you do not get random access
functionality?
 

kasthurirangan.balaji

for the record:

the very same function code, using cstdio and FILE functions (fopen,
fseek, ftell, fwrite, fclose) works just as intended...

So is it a limitation of fstream that you do not get random access
functionality?

fstream limitations are directly proportional to FILE limitations.
After all, fstream is a wrapper (facade) over the FILE utilities, and
templated.

For your case, why don't you replace ofstream with fstream? Do not
forget to include <fstream>.

ofstream indexFile ---> fstream indexFile

Thanks,
Balaji.
 

Lars Uffmann

fstream limitations are directly proportional to FILE limitations.
After all, fstream is a wrapper (facade) over the FILE utilities, and
templated.

Hmm - something's been implemented badly then :/

For your case, why don't you replace ofstream with fstream? Do not
forget to include <fstream>.

ofstream indexFile ---> fstream indexFile

Tried it - same problem - I cannot randomly access an existing file.
I guess I'll have to work with FILE utilities then.

Thanks anyways!

Lars
 

kasthurirangan.balaji

Hmm - something's been implemented badly then :/



Tried it - same problem - I cannot randomly access an existing file.
I guess I'll have to work with FILE utilities then.

Thanks anyways!

    Lars

I guess there is some problem with your code. The code below
works fine.


#include <fstream>
#include <iostream>
#include <cstring>

void write();
void read();

void write()
{
    std::fstream fstr("new");

    fstr.seekp(25, std::ios_base::beg);
    fstr.write("thank you", 9);
}

void read()
{
    std::fstream fstr("new");
    char str[10];

    fstr.seekg(25, std::ios_base::beg);
    fstr.read(str, 9);
    str[9] = '\0';                // terminate before printing
    std::cout << str << '\n';
}

int main()
{
    write();
    read();
}

file "new" contents

"0","balaji","18","1979"
thank youji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"
"0","balaji","18","1979"

Compiled and tested under AIX 5.3, xlC v7.0, 32-bit.

Thanks,
Balaji.
 

Lars Uffmann

Hi Balaji,

I guess there is some problem with your code. The code below
works fine.

Works fine as long as the file already exists. Try this:

int main()
{
    std::fstream fstr("new.txt", ios_base::in | ios_base::out |
        ios_base::binary);

    if (!fstr.is_open()) {
        cout << "creating file" << endl;
        fstr.open ("new.txt", ios_base::binary | ios_base::out);
        if (!fstr.is_open()) cout << "file still not open" << endl;
    }

    fstr.seekp (0, std::ios_base::end);
    cout << "position in file = " << fstr.tellp() << endl;

    fstr.close();

    return 0;
}

This won't work if the file doesn't exist. It WILL create the file (the
"file still not open" message will NOT appear), but nevertheless, tellp()
will still report the invalid put pointer position of -1. In my
application, I must not overwrite the contents of the file if it exists,
but if it doesn't, I want to create it.
Subsequent calls of fstr.open / fstr.close / fstr.open won't work either,
because the 2nd open will sometimes (more like always) fail to find the
newly created file - probably a disk caching problem.

Even the simple lines
    fstr.open ("new.txt", ios_base::binary | ios_base::out);
    fstr << "1";
will fail. The "1" simply won't BE in the newly created file.

This is on g++ (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)
- Windows XP machine (yes, I know)

May have to open another thread for this much easier problem - in case
no one can solve this ;)

Is there any way to make SURE a file stream (fstream) is actually ready
for output, before writing data to it?

Best Regards,

Lars
 

kasthurirangan.balaji

Hi Balaji,



Works fine as long as the file already exists. Try this:

int main()
{
        std::fstream fstr("new.txt", ios_base::in | ios_base::out |
ios_base::binary);

        if (!fstr.is_open()) {
                cout << "creating file" << endl;
                fstr.open ("new.txt", ios_base::binary | ios_base::out);
                if (!fstr.is_open()) cout << "file still not open" << endl;
        }

        fstr.seekp (0, std::ios_base::end);
cout << "position in file = " << fstr.tellp() << endl;

        fstr.close();

        return 0;

}

This won't work if the file doesn't exist. It WILL create the file (file
still not open message will NOT appear), but nevertheless, tellp() will
still tell us the invalid put pointer position of -1. In my application,
I must not overwrite the contents of the file, if it exists, but if it
doesn't, I want to create it.
Subsequent calls of fstr.open / fstr.close / fstr.open won't work,
because the 2nd open will sometimes (more like always) fail to find the
newly created file - probably a disk caching problem.

Even the simple lines
    fstr.open ("new.txt", ios_base::binary | ios_base::out);
    fstr << "1";
will fail. The "1" simply won't BE in the newly created file.

This is on g++ (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)
- Windows XP machine (yes, I know)

May have to open another thread for this much easier problem - in case
noone can solve this ;)

Is there any way to make SURE a file stream (fstream) is actually ready
for output, before writing data to it?

Best Regards,

    Lars

You may want to try this:

std::fstream fstr("new.txt", ios_base::in | ios_base::out |
ios_base::binary | ios_base::app);

Also, standard C++ streams are buffered - meaning the actual write won't
happen until the buffer is full. Use flush() to push the data out.
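
For illustration only (and, as it turns out further down, buffering was
not the actual problem here), a tiny sketch of the flush suggestion:

#include <fstream>

void writeAndFlush(std::fstream &fstr)
{
    fstr << "1";      // goes into the stream's buffer first
    fstr.flush();     // force the buffered data out to the file now
}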

Thanks,
Balaji.
 

James Kanze

It's probably my fault, thus the developer error in the
subject, but I'm having some troubles with ofstream. I am
trying to randomly access an output file - I want to write an
index for a larger datafile in here, with the index file
holding - in the correct order - file pointers & length
information for blocks in a big data file. Later on I'll
rewrite the big file from the information in the index file
and the data in the data file.
The rewriting works fine, but when I use the following code to
enlarge the index file (padding with 0), it will either ignore
the data written to the index file in a previous function call
(when opened without ios_base::app), or ignore my attempts to
write somewhere in the file using seekp() - this happens when
I open the file with ios_base::app.

ios_base::app forces all writes to be at the end of the file.
You definitely don't want to use it if you're using random
access.
(code below)
If you can point me to my error (I have tried basically all
combinations of open modes that seemed to make sense) - or
tell me that ofstream is simply not suited for this (it should
be, I think) - thanks a lot!

My first question would be: how portable do you need to be?
You've made a couple of assumptions that the standard doesn't
guarantee, but which do often hold.
I'll try to break the code down - let's assume buf is defined big enough:
/**/
struct myStruct {
streampos pos;

Attention: streampos is a fairly complex type, containing
information about state in the case of multibyte encodings. It
cannot, generally, be manipulated by memcpy or other code that
behaves similarly.

If you're dealing with binary data, you probably want to use
some sort of basic type here---long or long long---and convert
the streampos to and from it. (For files with multibyte
encodings, this will cause a loss of information.)
unsigned int length;
};
#define BUFSIZE 10000
void writeToIndexFile (const char *filename, int datasetNo, myStruct *s)
{
ofstream indexFile;
int filesize, missingBytes;
char buf[BUFSIZE];
indexFile.open (filename, ios_base::binary);

This line normally should truncate the file, causing any
previous contents to be lost.

The open modes in C++ are mapped to the open modes of fopen,
which results in some rather annoying limitations with regards
to the semantics available. In particular, it is impossible to
open an existing file exclusively for writing without either
destroying its contents, or forcing all writes to take place at
the end of file. To perform writes at random positions in an
existing file, it is necessary to open it with ios_base::in |
ios_base::out | ios_base::binary.
indexFile.seekp (0, ios_base::end); // move to eof position
filesize = indexFile.tellp(); // get putpointer @ eof position

Formally, there's no guarantee that a streampos can be
implicitly converted to an integral type. There's also no
guarantee that the resulting numeric value has any meaning as a
numeric value if it does convert---an implementation might, for
example, implement it with a sector number in the low order
bits, and the offset in the sector in the high order bits.

Practically, I think you're pretty safe with any compiler under
Windows or one of the mainstream Unixes. (Provided the file is
opened in binary mode under Windows, of course.)

Of course, with the open mode you've used, the file was
truncated on open, so the current position should always be 0.
// determine how big the file SHOULD be in order to be able to
// write s to the fixed position (datasetNo-1)*sizeof(myStruct)
missingBytes = (datasetNo - 1)*sizeof (myStruct) - filesize;
if (missingBytes > 0) { // if file not big enough
memset (buf, 0, sizeof(buf)); // make sure only 0 gets appended
indexFile.write (buf, missingBytes); // append just enough bytes

I seem to recall hearing that after seeking to the end of file,
some implementations set eofbit. If so, the write will fail.
I'd tend to throw in an "indexFile.clear()" before the write,
just to be sure.

And of course, you definitely want to check whether the write
succeeded---it will also fail, for example, if the disk is full.
indexFile.seekp ((datasetNo-1)*sizeof(myStruct), ios_base::beg);
indexFile.write ((char *) s, sizeof (myStruct));

Again, you're missing some important error handling.
indexFile.close();
}

The only real problem I see is with your open mode, at least
with most implementations. Try using an fstream (rather than an
ofstream), and opening with in | out | binary. (This should
result in the semantics of fopen( ..., "r+b" ), which seems to
be what you need.)

And the missing error handling. If any of the requests fails
for some reason, the behavior will be difficult to explain
without knowing which one failed.
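
Pulling those suggestions together, a minimal sketch of how the function
might look (an assembly of the advice above, not code from the thread;
the padding loop and the bool return convention are my own choices):

#include <fstream>

struct myStruct {
    long long    pos;      // a plain integer instead of streampos, as discussed above
    unsigned int length;
};

// Returns true on success, false if the file could not be opened or a write failed.
bool writeToIndexFile(const char *filename, int datasetNo, const myStruct *s)
{
    std::fstream indexFile(filename,
        std::ios_base::in | std::ios_base::out | std::ios_base::binary);
    if (!indexFile)
        return false;                        // "r+b" semantics: the file must already exist

    indexFile.seekp(0, std::ios_base::end);
    long long filesize = indexFile.tellp();  // see the portability caveats above
    long long wanted   = static_cast<long long>(datasetNo - 1) * sizeof(myStruct);

    indexFile.clear();                       // in case the seek to end set eofbit
    while (filesize < wanted && indexFile) { // pad with zero bytes up to the slot
        indexFile.put('\0');
        ++filesize;
    }

    indexFile.seekp(wanted, std::ios_base::beg);
    indexFile.write(reinterpret_cast<const char *>(s), sizeof(myStruct));

    return indexFile.good();                 // check, rather than assume, that the writes worked
}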
 

James Kanze

for the record:
the very same function code, using cstdio and FILE functions (fopen,
fseek, ftell, fwrite, fclose) works just as intended...
So is it a limitation of fstream that you do not get random access
functionality?

The semantics of fstream are defined in terms of FILE*. The
problem is only to find the corresponding mapping of the open
modes. Which open mode did you use with fopen?

(FWIW: I tend to do this sort of thing at the system level,
using the Posix functions. But I'm rarely concerned about
portability beyond Unix, and I usually need some features not
supported by FILE* or fstream, such as full synchronization.)
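
For what it's worth, a small illustration of what that Posix-level
approach can look like (my own example, not something from the thread):
open read/write, creating the file if necessary, write at an absolute
offset, and force the data out to disk.

#include <fcntl.h>
#include <unistd.h>

int writeAt(const char *filename, off_t offset, const void *data, size_t len)
{
    int fd = open(filename, O_RDWR | O_CREAT, 0644); // keep existing contents, create if missing
    if (fd < 0)
        return -1;
    ssize_t written = pwrite(fd, data, len, offset); // seek + write in one call; any gap reads back as zeros
    int ok = (written == (ssize_t)len) && (fsync(fd) == 0); // fsync provides the "full synchronization"
    close(fd);
    return ok ? 0 : -1;
}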
 

James Kanze

fstream limitations are directly proportional to FILE limitations.
After all fstream is wrapper(facade) over FILE utilities and
templated.

That's not true, at least not for the implementations I'm
familiar with. The fstream semantics are defined in terms of
FILE* semantics, but in most cases, the actual implementation
doesn't use FILE*.
 

Lars Uffmann

You may want to try this
std::fstream fstr("new.txt", ios_base::in | ios_base::out |
ios_base::binary | ios_base::app);

That's where my problems with fstream start *g*
According to the fstream::open documentation, what the ios_base::app flag
does is:
(append) Set the stream's position indicator to the end of the stream
before each output operation.

However, I still want to be able to write into the middle of the file.

Edit: Pinpointed the problem, posting as reply to my original post.
Also, standard C++ streams are buffered - meaning the actual write won't
happen until the buffer is full. Use flush() to push the data out.

That wasn't the problem. :)

Best Regards,

Lars
 

Lars Uffmann

Lars said:
It's probably my fault, thus the developer error in the subject, but I'm
having some troubles with ofstream.

Okay, for a change, it wasn't my fault. Here's what I was trying to do,
and what actually happened:

I wanted to open a file stream for input/output operations and if that
failed (assuming that's because the file doesn't yet exist) open it for
output/append so that it gets created.

Upon failing to open the file (fstream, ofstream, doesn't matter)
however, the failbit gets set, and the second open operation (which
would succeed) DOES NOT RESET the error states, even though it succeeds!

In my eyes, that's a very nasty design of the class, especially
considering that the 2nd open succeeds, instead of at least(!) throwing
an exception due to the set failbit. It is definitely missing some
mention in the documentation on cplusplus.com/reference

So what you have to do after a failed open is simply fstream::clear() -
then you can create the file and write to it.

After having found the source of my problem, I also found a discussion
about it here (credit to those guys for finding out that it is actually
the write/read operations AFTER the successful 2nd open that fail due
to the set failbit, not the open itself):
http://www.allegro.cc/forums/thread/594765/721477

Proof of concept code below, simply set (const int workingCode = 0;) to
see the bad behaviour, and (const int workingCode = 1;) to see the
working behaviour.

Best Regards,

Lars

----
#include <iostream>
#include <fstream>
#include <cstdio>   // for remove()

using namespace std;

int main()
{
    const char *filename = "new.txt";
    const int workingCode = 1;

    fstream fstr;

    remove (filename);
    fstr.open (filename, ios_base::in | ios_base::out | ios_base::binary);
    if (fstr.fail()) {
        cout << "file doesn't exist, creating new one" << endl;
        fstr.open (filename, ios_base::binary | ios_base::out | ios_base::app);
        if (workingCode) fstr.clear();
    }

    fstr << "1";
    fstr.close();

    return 0;
}
 

James Kanze

Works fine as long as the file already exists. Try this:
int main()
{
std::fstream fstr("new.txt", ios_base::in | ios_base::out |
ios_base::binary);
if (!fstr.is_open()) {
cout << "creating file" << endl;
fstr.open ("new.txt", ios_base::binary | ios_base::out);
if (!fstr.is_open()) cout << "file still not open" << endl;
}
fstr.seekp (0, std::ios_base::end);
cout << "position in file = " << fstr.tellp() << endl;
fstr.close();
return 0;
}
This won't work if the file doesn't exist. It WILL create the
file (file still not open message will NOT appear), but
nevertheless, tellp() will still tell us the invalid put
pointer position of -1.

That sounds like an error in the library. But it should be easy
to avoid: if you've just created the file, you know that its
length is 0, so you don't need the seek to the end.

Alternatively, you could just check once on program start-up if
the file exists, and create it (empty) if not.
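
A minimal sketch of that start-up check (the helper and file handling are
just one way to do it, not code from the thread): create the file empty
only if it cannot be opened for reading, so a later in | out | binary
open will always find it.

#include <fstream>

void ensureIndexFileExists(const char *filename)
{
    std::ifstream probe(filename, std::ios_base::binary);
    if (!probe) {
        // file does not exist (or is unreadable): create it empty,
        // without touching any existing data
        std::ofstream create(filename, std::ios_base::binary);
    }
}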
In my application,
I must not overwrite the contents of the file, if it exists, but if it
doesn't, I want to create it.

C doesn't have a mode for that (except "a", which forces all
writes to the end). So nor does C++, although logically, one
would expect it.
Subsequent calls of fstr.open / fstr.close / fstr.open won't work,
because the 2nd open will sometimes (more like always) fail to find the
newly created file - probably a disk caching problem.

That is strange. I've had that sort of problem at times, but
only when the creation was on one machine, and attempt to open
the existing file on another.
Even the simple lines
fstr.open ("new.txt", ios_base::binary | ios_base::out);
fstr << "1";
will fail. The "1" simply won't BE in the newly created file.

After having closed fstr, of course.
This is on g++ (GCC) 3.4.4 (cygming special, gdc 0.12, using dmd 0.125)
- Windows XP machine (yes, I know)

Yes. Under Windows, I'd definitely use VC++, rather than g++.

One thing, however. I seem to recall that under Windows, if a
file is open (or maybe only open for writing), other processes
cannot open it. You might want to ensure that you're not
leaving the file open somewhere else.
May have to open another thread for this much easier problem -
in case no one can solve this ;)
Is there any way to make SURE a file stream (fstream) is
actually ready for output, before writing data to it?

Test the status of the stream:

    if ( stream ) {
        /* OK */
    } else {
        /* error */
    }
 

Lars Uffmann

James said:
ios_base::app forces all writes to be at the end of the file.
You definitely don't want to use it if you're using random
access.
Yeah, I know - I was merely playing around trying to actually create the
file - it seemed that app was the only option that actually achieved that
- but I was wrong. binary | out will do.
My first question would be: how portable do you need to be?
You've made a couple of assumptions that the standard doesn't
guarantee, but which do often hold.

Thanks for pointing that out - especially the information about the
streampos type - I'll recheck my code with regards to that. I don't need
to be portable in this case, but I like to be.
Attention: streampos is a fairly complex type, containing
information about state in the case of multibyte encodings.
I suppose multibyte encodings only apply to non-binary open modes? Is
that why you suggested the use of long or long long? I'll do that then,
since I am definitely working with binary.
I seem to recall hearing that after seeking to the end of file,
some implementations set eofbit. If so, the write will fail.
Ouch. That'd be nasty. Got any sources on that? If I seek the end of the
file for the PUT pointer, that should definitely not happen. For the get
pointer I can understand why, and actually that is perfectly acceptable
behaviour.
I'd tend to throw in an "indexFile.clear()" before the write,
just to be sure.

That's actually the solution to my initial problem of resetting fstr
properly after a failed open() :) But I guess I was playing around with
the solution already at the time your post got to my newsserver.
And of course, you definitely want to check whether the write
succeeded---it will also fail, for example, if the disk is full.
Of course. As I said elsewhere - I'll do some thorough error checking
once I have a working application :) Right now I need a new GUI engine -
WideStudio's Native Application Builder just doesn't cut it. Going for a
wxWidgets tutorial now - possibly Qt.

Thanks & Best Regards,

Lars
 

Lars Uffmann

Lars said:
So what you have to do after a failed open is simply fstream::clear() -
then you can create the file and write to it.

I'm really interested in feedback on this - am I understanding something
wrong, or is this a design flaw in the fstream and derived classes?

Comments, please ;)

Best Regards,

Lars
 

James Kanze

James Kanze wrote:

[...]
Thanks for pointing that out - especially the information
about the streampos type - I'll recheck my code with regards
to that. I don't need to be portable in this case, but I like
to be.

There is no standard portable way of getting the length of a
file, other than by reading it, and counting how many bytes
you've read. I tend to use stat/fstat when I need the size of a
file, but my portability concerns are limited to Unix and Unix
look-alikes. As I said, I think that seeking to the end, then
converting the streampos returned from a [pg]tell to a
streamoff, and the streamoff to an integer type of sufficient
size, should be portable to most widely used systems. You may
need an explicit conversion to do it, however, since streamoff
can be a user defined type, rather than just a typedef to an
integral type (but again, I wouldn't really expect this under
Unix or Windows).
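
A minimal sketch of that seek-and-convert approach, with the conversions
spelled out. As noted, the standard does not guarantee that the result is
a byte count, but on mainstream Unix and Windows implementations it is.

#include <fstream>

long long fileSize(const char *filename)
{
    std::ifstream in(filename, std::ios_base::binary);
    if (!in)
        return -1;                       // could not open
    in.seekg(0, std::ios_base::end);
    std::streamoff off = in.tellg();     // streampos -> streamoff
    return static_cast<long long>(off);  // streamoff -> integer type of sufficient size
}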
I suppose multibyte encodings only apply to non binary open
modes?

Nope. I'd forgotten to mention this, but you almost certainly
want to imbue the file with the "C" locale. The "encoding" is
independent of the binary/text mode, and depends strictly on the
imbued locale.

And of course, the type streampos doesn't change---it contains
the state information even if the imbued locale doesn't use it.
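
A small sketch of that locale suggestion (the file name is only an
example): imbue the stream with the classic "C" locale before opening it,
so no multibyte conversion is involved.

#include <fstream>
#include <locale>

void openIndex(std::fstream &file)
{
    file.imbue(std::locale::classic());  // the "C" locale
    file.open("index.dat",
              std::ios_base::in | std::ios_base::out | std::ios_base::binary);
}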
Since you suggested the use of long or long long? I'll do that
then, since I am definitely working with binary.

You'll still get into trouble with things like padding and byte
order (and at least theoretically, representation, but there are
very, very few systems today that don't use 2's complement).
Define a format (or use an existing one, like XDR), format your
data to it, and output the formatted data.

(If you call ostream::write with anything but a char const*,
you'll need a reinterpret_cast. And as we all know,
reinterpret_cast means that the code is not portable, that it
depends on some aspect of the implementation.)
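
As one possible concrete format (the field widths and byte order are my
choice, not something fixed in the thread): write each index record as
8 bytes of file position followed by 4 bytes of length, least-significant
byte first, so host padding and byte order no longer matter.

#include <ostream>

// Write 'bytes' bytes of 'value', least-significant byte first.
static void writeLE(std::ostream &out, unsigned long long value, int bytes)
{
    for (int i = 0; i < bytes; ++i) {
        out.put(static_cast<char>(value & 0xFF));
        value >>= 8;
    }
}

void writeIndexRecord(std::ostream &out, unsigned long long pos, unsigned int length)
{
    writeLE(out, pos, 8);     // file position of the block
    writeLE(out, length, 4);  // length of the block
}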
Ouch. That'd be nasty. Got any sources on that?

Not directly. Someone was complaining about it in one of the
forums, I think. Technically, the implementation is (or was---I
think the standard has been reworded here) correct. Strictly
speaking, the eofbit means that the next read is guaranteed to
encounter eof, and if you've done a seek to the end of file,
that condition is true, so an implementation is (or was) allowed
to set it. The problem is that if the bit is set, the next
operation will fail, regardless of what it is. And that once
set, it will only be reset by a clear(). (A more reasonable
approach might be that the next read will fail, but all other
operations will be attempted without checking the bit, and that
it is reset on a seek.)

At any rate, you should be checking for errors after each
action; if all actions fail after a seek to end of file, you
know where the problem lies.
If I seek the end of the file for the PUT pointer, that should
definitely not happen. For the get pointer I can understand why
and actually that is perfectly acceptable behaviour.

The problem is that in filebuf, the put pointer and the get
pointer are one. (At the streambuf/iostream level, it is
unspecified whether there is a single position pointer, or
separate position pointers for the get and put areas.
stringbuf/sstream uses separate pointers, filebuf/fstream a
unified pointer.)
That's actually the solution to my initial problem of
resetting fstr properly after a failed open() :) But I guess I
was playing around with the solution already at the time your
post got to my newsserver.
Of course. As I said elsewhere - I'll do some thorough error
checking once I got a working application :)

That's one aspect. The other point I was trying to make was
that error checking could help you find where things were going
wrong. When writing, for example, I'll often defer error
checking until after the close. If anything went wrong, the
error will still be there. But during debugging, it might be
useful to know just how far you got before the error occurred.
Right now I need a new GUI engine - WideStudio's Native
Application Builder just doesn't cut it. Going for a wxWidgets
tutorial now - possibly Qt.

The best I've found to date is Java Swing, over a Corba
interface:).
 

James Kanze

Okay, for a change, it wasn't my fault. Here's what I was
trying to do, and what actually happened:
I wanted to open a file stream for input/output operations and
if that failed (assuming that's because the file doesn't yet
exist) open it for output/append so that it gets created.

(Output/append, or just output?)
Upon failing to open the file (fstream, ofstream, doesn't
matter) however, the failbit gets set, and the second open
operation (which would succeed) DOES NOT RESET the error
states, even though it succeeds!

The second open operation isn't even tried. The rules
concerning streams are very clear: once an error bit is set (and
eofbit counts as an error bit), it remains set until explicitly
cleared by the user.

There are at least two cases where one could argue that this
isn't really what is wanted. The first is the one you've
encountered---an open is really a sort of "re-initialization",
and one sort of expects it to be treated as such. The second is
the eofbit, because it really isn't an error, and the user
doesn't normally look at it (unless some other error occurs).
Logically, there's no reason that having encountered end of file
during look-ahead, for example, the next attempt to write or to
seek backwards in the file would fail. (On the other hand, if a
read has actually failed because of end of file, it is
reasonable to require the user to have recognized this fact, and
to explicitly call clear().)

The first, at least, has been corrected in the current draft:
ifstream::open() and ofstream::open() should now call
rdbuf()->open without checking error state, and clear the error
state if the call succeeds. So sometime in the future...
In my eyes, that's a very nasty design of the class,
especially considering that the 2nd open succeeds, instead of
at least(!) throwing an exception due to the set failbit. It
is definitely missing some mention in the documentation on
cplusplus.com/reference

In the original standard, ifstream::open() and ofstream::open()
behave like every other istream or ostream function: the only
way for an error bit to be reset is for the user to explicitly
reset it. The situation with open isn't documented because it
is the "default"---open works like every other function.
 

kwikius

Upon failing to open the file (fstream, ofstream, doesn't matter)
however, the failbit gets set, and the second open operation (which
would succeed) DOES NOT RESET the error states, even though it succeeds!

In my eyes, that's a very nasty design of the class, especially
considering that the 2nd open succeeds, instead of at least(!) throwing
an exception due to the set failbit.

AFAIK you can control some aspects of exception throwing in regard to
the state flags, via exceptions(flags) member function in basic_ios.

That said, I'm pretty flakey about the details.
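
For completeness, a small sketch of the exceptions() mechanism (the file
name is just an example): once the mask is set, any operation that raises
one of those bits throws std::ios_base::failure instead of failing
silently.

#include <fstream>
#include <iostream>

void openOrThrow()
{
    std::fstream file;
    file.exceptions(std::ios_base::failbit | std::ios_base::badbit);
    try {
        file.open("new.txt",
                  std::ios_base::in | std::ios_base::out | std::ios_base::binary);
        file << "1";
    } catch (const std::ios_base::failure &e) {
        std::cerr << "stream operation failed: " << e.what() << '\n';
    }
}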

regards
Andy Little
 

Lars Uffmann

James said:
(Output/append, or just output?)

Just output, I dragged that mistake of mine along in the text :)

The second open operation isn't even tried.

So I would have thought :) But in my version of g++, it is tried and
succeeds. If you do the fstream::clear() _after_ the 2nd fstream::open(),
the succeeding write (<<) will successfully put data into the file.
That's what was confusing me a lot on initial debugging - because the
file would be created, and fstr.is_open() would evaluate as true - so I
was scratching my head a lot why the succeeding write failed silently :)

The first, at least, has been corrected in the current draft:
ifstream::open() and ofstream::open() should now call
rdbuf()->open without checking error state, and clear the error
state if the call succeeds. So sometime in the future...
Well - as I stated above - the gcc 3.4.4 already succeeds in opening the
file, however, it does not clear the error state upon success. Somewhere
in the middle between two standards?

But I guess if it's been changed in the draft, that's all I can ask for
and I'm perfectly happy with it. :) Well - except that maybe
cplusplus.com should mention in the documentation for [oi]fstream that the
failbit is NOT reset on a successful open, and that an explicit call to
clear() or setstate() is required.

Thanks again & Best Regards,

Lars
 
