Hi,
I built my executable on a 2.4.x kernel and it runs as expected on that system. However, when I try to run it on a 2.6.x kernel it dumps core :-x and the stack trace points at rdstate().
My code checks the stream status before calling the socket read() function. I'm using:
ios *x;
if((x->rdstate() & ios::badbit) != 0)
{
return 0;
}
else
{
....
...
sock->read(...);
..
..
}
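For reference, a minimal compilable version of the same check looks roughly like this (std::ifstream stands in for our socket class, which isn't shown here, and readIfGood is just an illustrative name):

#include <fstream>
#include <iostream>

// Sketch of the same badbit check, written against the standard headers.
// std::ifstream is a stand-in for the socket stream; readIfGood is an
// illustrative name, not from the real program.
int readIfGood(std::ifstream& stream, char* buf, std::streamsize n)
{
    std::ios* x = &stream;  // points at a live stream object
    if ((x->rdstate() & std::ios::badbit) != 0)
        return 0;           // stream is unusable, skip the read
    stream.read(buf, n);    // read only while the stream is not bad
    return static_cast<int>(stream.gcount());  // bytes actually read
}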
I observed that the rdstate() output differs between the 2.4.x and 2.6.x Linux servers:
On the 2.6.x kernel:
ios --> rdstate():0 badbit:4 failbit:2 eofbit:1 ==> for good condition
ios --> rdstate():192 badbit:4 failbit:2 eofbit:1 ==> for bad condition
On the 2.4.x kernel:
ios --> rdstate():172 badbit:4 failbit:2 eofbit:1 ==> for good condition
ios --> rdstate():14 badbit:4 failbit:2 eofbit:1 ==> for bad condition
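The traces above come from printing rdstate() alongside the library's ios bit constants, roughly like this sketch (dumpState is an illustrative name, not the exact code):

#include <iostream>

// Illustrative sketch: print the raw rdstate() value next to the library's
// bit constants. The constants' numeric values are implementation-defined,
// so they can differ between C++ runtimes.
void dumpState(std::ios& s)
{
    std::cout << "ios --> rdstate():" << static_cast<int>(s.rdstate())
              << " badbit:"  << static_cast<int>(std::ios::badbit)
              << " failbit:" << static_cast<int>(std::ios::failbit)
              << " eofbit:"  << static_cast<int>(std::ios::eofbit)
              << std::endl;
}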
I'd like to know why rdstate() returns different values on different kernels. Any help would be greatly appreciated.