It's a compile time constant template parameter.
I know that.
std::bitset requires a compile time constant as template parameter.
And you use it as a return value, so you have to declare the
type outside of the function. That's what I'd overlooked.
Silly design; I'd use something else if the standard library
offered anything more reasonable (constrained to only valid
data) than std::string.
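A minimal sketch of the point being made, with illustrative names (IntBits and toBits are mine, not from the thread): since the bitset's size is a template parameter, the complete type has to be spelled out (or typedef'ed) outside the function before it can serve as a return type.

```cpp
#include <bitset>
#include <climits>

// The size is a compile time constant, so the complete type must be
// declared outside the function in order to use it as the return type.
// (IntBits and toBits are illustrative names, not from the thread.)
typedef std::bitset< CHAR_BIT * sizeof( unsigned ) > IntBits;

IntBits toBits( unsigned value )
{
    return IntBits( value );    // std::bitset is constructible from an integer
}
```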
Mostly in order to deal with 'float', as the OP requested.
But he didn't say what he wanted as output for a float.
I'm not sure yet what he really wants. Since he's outputting to
standard out, I can assume text (since you can't output binary
to standard out). But beyond that, I'm (or rather we're) just
guessing.
Otherwise std::bitset can be constructed directly from the value.
I think it's implicit in a "pure binary representation". Or at
least, that the implementation behave "as if" it were the case:
(integralType & 1) is guaranteed to expose the bit 2^0.
But it would be nice if the standard had requirements which
guarantee that a direct construction of a bitset from e.g. an
int produces the same result as this function. My intention was
not to violate such requirements, if they exist.
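The guarantee being discussed can be sketched in a few lines (the function name is mine): for an unsigned value, masking with 1 after n right shifts yields the bit with weight 2^n.

```cpp
// Sketch of the guarantee discussed above: (value & 1) exposes the
// 2^0 bit, and shifting an unsigned value right exposes each higher
// bit in turn, independently of how the bits are laid out in memory.
unsigned bitAt( unsigned value, unsigned n )
{
    return ( value >> n ) & 1u;     // bit with weight 2^n
}
```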
You know something: I've never used std::bitset. In the rare
cases where I've needed bitset's, I've simply continued using my
pre-standard class. So I don't really know too much about what
std::bitset requires or guarantees.
What I was really wondering about, however, is the
appropriateness (or the necessity) of passing through a bitset
of any kind. Why not just generate the characters '0' and '1'
directly?
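Generating the characters directly might look something like this, a sketch for unsigned arguments only (the function name is illustrative):

```cpp
#include <climits>
#include <string>

// Generate the '0' and '1' characters directly, with no intermediate
// bitset: fill from the right, consuming one bit per iteration.
std::string toBinaryText( unsigned value )
{
    std::string result( CHAR_BIT * sizeof( unsigned ), '0' );
    for ( std::string::size_type i = result.size(); i > 0; --i ) {
        result[ i - 1 ] = ( value & 1u ) != 0 ? '1' : '0';
        value >>= 1;
    }
    return result;
}
```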
Uhm, sorry, there is no such thing as little-endian with some
other byte order.
Little endian means that bit numbering increases in the same
direction as memory addresses, for any size of unit.
Little endian means that the sub-unit numbering increases in the
same direction as the sub-units appear physically. Thus, the
Internet uses little endian bit ordering in bytes, but big
endian byte ordering in higher order elements, like integers.
If you're talking about little endian bit ordering, you're
talking about the order of the bits in a byte.
for ( size_t i = sizeof( T ) - 1; i != size_t( -1 ); --i )
{
    result <<= bitsPerByte;
    result |= BitSet( p[ i ] );
}
return result;
}
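For context, a guess at the complete function the fragment above comes from (the surrounding declarations are assumptions, not quoted from the thread): build a bitset from the object representation of the argument, one byte at a time.

```cpp
#include <bitset>
#include <climits>
#include <cstddef>

int const bitsPerByte = CHAR_BIT;

// A sketch of the complete function the fragment might belong to:
// accumulate the bytes of t's object representation into a bitset,
// byte at index sizeof(T)-1 first. The traversal order and the name
// objectBits are assumptions.
template< typename T >
std::bitset< CHAR_BIT * sizeof( T ) > objectBits( T const& t )
{
    typedef std::bitset< CHAR_BIT * sizeof( T ) > BitSet;
    unsigned char const* p = reinterpret_cast< unsigned char const* >( &t );
    BitSet result;
    for ( std::size_t i = sizeof( T ) - 1; i != std::size_t( -1 ); --i )
    {
        result <<= bitsPerByte;         // make room for the next byte
        result |= BitSet( p[ i ] );     // OR it into the low bits
    }
    return result;
}
```

Note that for multi-byte types the resulting bit pattern depends on the machine's byte order.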
Maybe I'm misunderstanding something, but when someone says
something like "binary notation to standard out", I imagine
something like "00011100" (for 0x1C).
Well, that's pretty much the meaning of "notation". And you
can't output binary to standard out. So whatever else he's
asking for, it's a text representation.
The question is, of course, what he wants for float. I can
think of at least three interpretations, and I've not the
slightest idea which one he's looking for.
Well, see below: the above doesn't really require a class or anything.
The class is just a convenient means of getting the format you
want in the ostream. You could just as easily make it a single
function which returned a string.
But I'm not so concerned about that as I am that both your
solution and Juha's mix data representation and I/O. I'd
prefer a member function that returns a pure data
representation of the binary (I used a bitset, but string,
although not ideal in the sense of constraints on the value,
would be acceptable).
OK. I can understand that point of view. I presume then that
std::bitset (like my pre-standard BitVector) has a << operator
for the actual output.
Maybe I'm reading too much into the word "notation", but in my
mind, it means a textual representation; his problem is output
formatting. In that case, introducing an intermediate type
(other than as a decorator for formatting) is unnecessary
added complexity. If, on the other hand, he needs a
representation which he can then further manipulate, std::bitset
is the "official" answer.
[Usage]:
unsigned i = 42 ;
std::cout << binary( i ) << std::endl ;
The code I posted earlier mainly tackles float and double in
addition to integrals.
For the example above you don't need such code, because you can just do
unsigned const i = 42;
std::cout << std::bitset<CHAR_BIT*sizeof(i)>( i ) << std::endl;
Note the reduction in number of lines, to just 2 (no support class).
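A similarly short version is possible for float once its bits are copied into a same-sized unsigned integer. A sketch, assuming C++11's &lt;cstdint&gt; and a 32-bit IEEE float (neither assumption is from the thread):

```cpp
#include <bitset>
#include <cstdint>
#include <cstring>

// Sketch (not from the thread): the std::bitset trick applied to
// float, by first copying its object representation into a same-sized
// unsigned integer. Assumes float is 32 bits.
std::bitset< 32 > floatBits( float f )
{
    std::uint32_t bits = 0;
    std::memcpy( &bits, &f, sizeof f );     // well defined type pun
    return std::bitset< 32 >( bits );
}
```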
But will it also handle user defined integral types?
On the other hand, you're right. Unless there is an absolute
need to support such types, this is a much better solution than
mine.
Well, I'm not sure that that's really a problem with your code.
I think your code would work just fine (for integral types).
It will go into an endless loop for negative values if >>
propagates the sign.
I'll repeat something I wrote in another thread a few moments
ago: I stick to unsigned types when manipulating bits.
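That advice can be sketched as follows (the function is mine, not from the thread): convert to the unsigned type first, so that >> is guaranteed to shift in zeros and the loop terminates even for negative inputs.

```cpp
#include <string>

// Why signed >> is dangerous here: on implementations where right
// shift of a negative value propagates the sign bit, the value never
// reaches zero and the loop runs forever. Converting to unsigned
// first (the advice above) makes the shift well defined.
std::string bitsOf( int value )
{
    unsigned u = static_cast< unsigned >( value );  // well defined, modulo 2^N
    std::string result;
    do {
        result.insert( result.begin(), ( u & 1u ) != 0 ? '1' : '0' );
        u >>= 1;                                    // unsigned shift: fills with 0
    } while ( u != 0 );
    return result;
}
```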
The real question remains, however: why does he want this? What
is he trying to do? Does he want float to output something like
"1.0011001B05"? (Somehow I doubt it, but taken literally,
that's really what he asked for.) Or does he want a binary dump
of the underlying memory, which is what your code does (but
then, I would generally prefer it broken up into bytes, with a
space between each byte)?
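The last interpretation, a dump of the underlying memory broken up into bytes, could be sketched like this (names are illustrative; byte order is whatever the machine uses):

```cpp
#include <climits>
#include <cstddef>
#include <string>

// Dump the object representation byte by byte, most significant bit
// of each byte first, with a space between bytes.
std::string memoryDump( void const* object, std::size_t size )
{
    unsigned char const* p = static_cast< unsigned char const* >( object );
    std::string result;
    for ( std::size_t i = 0; i != size; ++i ) {
        if ( i != 0 )
            result += ' ';
        for ( int bit = CHAR_BIT - 1; bit >= 0; --bit )
            result += ( ( p[ i ] >> bit ) & 1 ) != 0 ? '1' : '0';
    }
    return result;
}
```

Applied to a float, this shows the raw bytes as stored, which is one of the three interpretations mentioned above.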