Is this the best set of bitset macros in the world or what!?


Brian

// This macro is key
//
#define bitn(n) (1 << n)

// Define some stuff to "document" code without documenting anything ("self-documenting code")!
//
#define bitset8 uint8
#define bitset16 uint16
#define bitset32 uint32
#define bitset64 uint64

// Gotta love "close to the metal"!
//
#define chkbit(x, n) (((x) & bitn(n)) ne 0)
#define setbit(x, n) ((x) |= bitn(n))
#define clrbit(x, n) ((x) &= ~bitn(n))
#define tglbit(x, n) ((x) ^= bitn(n))

// WINDOW_THICK_BORDER|WINDOW_MINMAXBUTTONS anyone?
//
#define chkmask(x, m) (((x) & (m)) eq m)
#define setmask(x, m) ((x) |= (m))
#define clrmask(x, m) ((x) &= (~m))
#define tglmask(x, m) ((x) ^= (m))

// some uses may require the following
#define bitn64(n) (1ui64 << n)
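
For illustration, the mask macros are presumably meant to be used like this, with hypothetical window-style flags (the flag names are made up, and this assumes the eq/ne defines for == and != that come from a prior header, as revealed downthread; note too that the trailing m in chkmask is unparenthesized, so a compound mask is safest behind its own parenthesized define):

#define WINDOW_THICK_BORDER bitn(1)
#define WINDOW_MINMAXBUTTONS bitn(2)
#define WINDOW_DEFAULT_STYLE (WINDOW_THICK_BORDER | WINDOW_MINMAXBUTTONS)

bitset32 style = 0;
setmask(style, WINDOW_DEFAULT_STYLE); // turn both flags on
int both = chkmask(style, WINDOW_DEFAULT_STYLE); // nonzero only if both are set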

**********

Aside, I use n-1 rather than n in the bitn macros, knowing that there is
potential for error there. I prefer, in this case, great semantics rather
than submission to the language.
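
Taken literally, the 1-based variant described would presumably read (a guess at the intent, since the listing above shows the plain-n form):

#define bitn(n) (1 << ((n) - 1)) // bit 1 is the least significant bit

so that bitn(1) names the first bit of the set.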
 

Ian Collins

// This macro is key
//
#define bitn(n) (1 << n)

A common one at that.
// Define some stuff to "document" code without documenting anything ("self-documenting code")!
//
#define bitset8 uint8
#define bitset16 uint16
#define bitset32 uint32
#define bitset64 uint64

These can be tidied up to

typedef uint8_t bitset8;
typedef uint16_t bitset16;
typedef uint32_t bitset32;
typedef uint64_t bitset64;
// Gotta love "close to the metal"!
//
#define chkbit(x, n) (((x) & bitn(n)) ne 0)
ne?

#define setbit(x, n) ((x) |= bitn(n))
#define clrbit(x, n) ((x) &= ~bitn(n))
#define tglbit(x, n) ((x) ^= bitn(n))

// WINDOW_THICK_BORDER|WINDOW_MINMAXBUTTONS anyone?
//
#define chkmask(x, m) (((x) & (m)) eq m)
eq?

#define setmask(x, m) ((x) |= (m))
#define clrmask(x, m) ((x) &= (~m))
#define tglmask(x, m) ((x) ^= (m))

// some uses may require the following
#define bitn64(n) (1ui64 << n)
Eh?

**********

Aside, I use n-1 rather than n in the bitn macros, knowing that there is
potential for error there.

Do you?
 

Barry Schwarz

Not really. Similar ones have been posted here for years and the
concept probably precedes Usenet.
// This macro is key
//
#define bitn(n) (1 << n)

You remembered the parentheses almost everywhere else, why not here?
And what happens if you want to test the sign bit? You might avoid
syntax errors and undefined behavior with
#define bit(n) (1u << (n))

While I don't see the need, some might even want 1lu or 1llu.
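
To see why the inner parentheses matter, compare the expansions for a compound argument (a shift binds tighter than &):

bitn(v & 0x3) // expands to (1 << v & 0x3), i.e. ((1 << v) & 0x3)
bit(v & 0x3)  // expands to (1u << (v & 0x3)), which is what was meant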
// Define some stuff to "document" code without documenting anything ("self-documenting code")!
//
#define bitset8 uint8
#define bitset16 uint16
#define bitset32 uint32
#define bitset64 uint64

Don't you think these would be better as typedefs? And the standard
nomenclature is uintN_t.
// Gotta love "close to the metal"!
//
#define chkbit(x, n) (((x) & bitn(n)) ne 0)

There is a macro not_eq but ne is not a keyword.
#define setbit(x, n) ((x) |= bitn(n))
#define clrbit(x, n) ((x) &= ~bitn(n))
#define tglbit(x, n) ((x) ^= bitn(n))

// WINDOW_THICK_BORDER|WINDOW_MINMAXBUTTONS anyone?
//
#define chkmask(x, m) (((x) & (m)) eq m)

When did "eq" become a keyword?
#define setmask(x, m) ((x) |= (m))
#define clrmask(x, m) ((x) &= (~m))
#define tglmask(x, m) ((x) ^= (m))

// some uses may require the following
#define bitn64(n) (1ui64 << n)

The n still needs parentheses.
 

James Dow Allen

More interesting are functions to work with strings
of bits that might span a boundary. I posted such
not so long ago but discussion focused solely (and
rather inconclusively) on the usability of CHAR_BIT.

(A real pain on Little-Endian machines is that bit
numbers depend on data-type, i.e. bit #21 is in different
places depending on whether you access the "bit array"
with short or long. This is not a problem with Big-Endian.)

#define chkmask(x, m) (((x) & (m)) eq m)

You do like #defines, don't you?
Are there any you're not showing?
#define bitn(n) (1 << n)

Shouldn't this be
#define bitn(n) (1 leftshift n) or perhaps (see below)
#define bitn(n) (really_one leftshift n)
Aside, I use n-1 rather than n in the bitn macros, knowing that there is
potential for error there. I prefer, in this case, great semantics rather
than submission to the language.

Do I guess right to think you refuse to "suspend disbelief"
and insist that the FIRST token be the 1st, not the 0th?
My guess is c.l.c regulars from BOTH sides of the aisle
will recommend against this. Will that matter to you?

James Dow Allen
 

Marcin Grzegorczyk

James said:
More interesting are functions to work with strings
of bits that might span a boundary. I posted such
not so long ago but discussion focused solely (and
rather inconclusively) on the usability of CHAR_BIT.

(A real pain on Little-Endian machines is that bit
numbers depend on data-type, i.e. bit #21 is in different
places depending on whether you access the "bit array"
with short or long. This is not a problem with Big-Endian.)

I think you've got it the wrong way. It's little-endian machines that
are consistent in this respect.

Anyway, the only portable way is to access the array with unsigned char.
Larger types get you in the realm of undefined behaviour (due to
issues like misaligned accesses, permitted type aliasing rules, and, at
least in theory, padding bits). Although it can be a bit painful indeed
if you want to extract (portably, of course) a substring of more than
CHAR_BIT bits.
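
A minimal sketch of that painful-but-portable extraction, assuming an LSB-first bit numbering within each byte (the helper names here are made up):

#include <limits.h>
#include <stddef.h>

// Read bit number `bit` (0-based) from an array of unsigned char.
static unsigned get_bit(const unsigned char *a, size_t bit)
{
    return (a[bit / CHAR_BIT] >> (bit % CHAR_BIT)) & 1u;
}

// Extract len bits (len no wider than unsigned long) starting at bit
// number start, one bit at a time -- slow, but no alignment or
// aliasing worries.
static unsigned long get_bits(const unsigned char *a, size_t start, unsigned len)
{
    unsigned long r = 0;
    unsigned i;
    for (i = 0; i < len; i++)
        r |= (unsigned long)get_bit(a, start + i) << i;
    return r;
}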
 

Brian

Ian said:
A common one at that.


These can be tidied up to

typedef uint8_t bitset8;
typedef uint16_t bitset16;
typedef uint32_t bitset32;
typedef uint64_t bitset64;

OK, typedefs I am fine with. I have the uintX things macro'd to be the
platform things such as unsigned __int32.

// From prior header file
//
#define ne !=
#define eq ==

I didn't think those things would be hard to ascertain even if I left
them out for they are so obvious (to me). (I also have 'and' defined as
'&&' and 'or' defined as '||', amongst many other things, as I think it
cleans up ugly C syntax, especially with syntax-highlighting editors).

I have the following kinds of inlines (C++), or else the compiler complains:

Inline bool32 chkbit64(uint64 x, uint32 n){ return ((x & bitn64(n)) ne 0); }

Yes I do. I used to have it as just n, but I like the other semantics
much better. Maybe if it bites me hard enough in the future I'll rethink
that decision.
 

Brian

Barry said:
Not really. Similar ones have been posted here for years and the
concept probably precedes Usenet.


You remembered the parentheses almost everywhere else, why not here?

Agreed, can't hurt. Note though that the intent of the macros was for use
with the bitsetX things defined below.
And what happens if you want to test the sign bit? You might avoid
syntax errors and undefined behavior with
#define bit(n) (1u << (n))

While I don't see the need, some might even want 1lu or 1llu.

OK on 1u. But the compiler wouldn't really change to signed if an
unsigned was passed (guaranteed with use of the bitsetX defines below),
would it?
Don't you think these would be better as typedef? And the standard
nomenclature is uintN_t.

I use these kinds of things for uintX:

typedef unsigned __int32 uint32; // I may be too spoiled having a C++ compiler
There is a macro not_eq but ne is not a keyword.

I define it in a prior header file with the obvious syntax.
When did "eq" become a keyword?

It's another define I use that makes the code more readable and less
error-prone.
The n still needs parentheses.

Yes, but that is only a safeguard against unintended usage (not using the
bitsetX typedefs); still, no harm putting them there. 'Twas an error of omission.
 

Brian

James said:
More interesting are functions to work with strings
of bits that might span a boundary.

I have another set of defines for arbitrary-length bitsets. I find both
useful but for different situations.
I posted such
not so long ago but discussion focused solely (and
rather inconclusively) on the usability of CHAR_BIT.

(A real pain on Little-Endian machines is that bit
numbers depend on data-type, i.e. bit #21 is in different
places depending on whether you access the "bit array"
with short or long. This is not a problem with Big-Endian.)

That's a reason for having both arbitrary-length bitset macros and the
simple ones I posted. The arbitrary-length macros, I consider bitmaps
(with higher numbered bits increasing monotonically toward higher memory
addresses) while the unsigned int bitset macros I consider a set of flags
(such as characteristics of a window: border, etc.), where I don't care
how the bits are stored.
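
The arbitrary-length set isn't posted here; one common shape for bitmap macros matching that description (higher bit numbers at higher addresses, LSB-first within each byte, over an array of unsigned char; the names below are made up to match his style) is:

#include <limits.h>

#define bmpchk(a, n) (((a)[(n) / CHAR_BIT] >> ((n) % CHAR_BIT)) & 1u)
#define bmpset(a, n) ((a)[(n) / CHAR_BIT] |= (unsigned char)(1u << ((n) % CHAR_BIT)))
#define bmpclr(a, n) ((a)[(n) / CHAR_BIT] &= (unsigned char)~(1u << ((n) % CHAR_BIT)))
#define bmptgl(a, n) ((a)[(n) / CHAR_BIT] ^= (unsigned char)(1u << ((n) % CHAR_BIT)))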
You do like #defines, don't you?
Are there any you're not showing?

Yes, obviously. See the other replies I just made.
Shouldn't this be

I actually have the following define:

#define lshft <<

but am not as strict about using it as I am the other ones, and I guess I
wanted the << for immediate obviousness of the header's intent without
having to read any documentation.
or perhaps (see below)

That would be going overboard.
Do I guess right to think you refuse to "suspend disbelief"
and insist that the FIRST token be the 1st, not the 0th?
My guess is c.l.c regulars from BOTH sides of the aisle
will recommend against this. Will that matter to you?

I'm trying it out because I also have a set of arrays that index from one
rather than zero, and I think 1-based design is easier to program with
overall; I'm also planning on moving to a language that has it. I know,
it's not a very C-like thing to be doing. It's a personal library, too, so
it's not as if I have to train anyone to the rule.
 

Brian

Eric said:
`bitn(42)' will probably disappoint you. `bitn(v & 0x3)' most
certainly will.

The bitn macro is for use by the other macros. The other macros take
"arguments" of type bitsetX. Yes, the programmer must know how wide the
bitset variable is.
 

Eric Sosman

Ian said:
// This macro is key
//
#define bitn(n) (1 << n)
[...]
I have the following kinds of inlines (C++), or else the compiler complains:

Inline bool32 chkbit64(uint64 x, uint32 n){ return ((x & bitn64(n)) ne 0); }

Doesn't help in C; the bug is still present. Can't speak
for C++; that's a different language and a different newsgroup.
Yes I do. I used to have it as just n, but I like the other semantics
much better. Maybe if it bites me hard enough in the future I'll rethink
that decision.

Perhaps you'll forgive me for observing that correctness trumps
a Bunthorne-like preference for semantic niceties. In other words,
stylistic considerations regarding bug-ridden code are silly.
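
For reference, the residual problem is presumably the shift count (plus the nonstandard 1ui64 suffix): if n is 64 or more, the shift is undefined no matter how the macro is parenthesized. A C sketch that pins that down, using the standard types:

#include <stdint.h>
#include <stdbool.h>

// Reports bit n of x; n >= 64 simply reads as "not set" instead of
// invoking undefined behaviour.
static bool chkbit64(uint64_t x, unsigned n)
{
    return n < 64 && ((x >> n) & 1u) != 0;
}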
 

Seebs

// From prior header file
//
#define ne !=
#define eq ==

Never, EVER, do that.
I didn't think those things would be hard to ascertain even if I left
them out for they are so obvious (to me).

One can make a reasonable guess, but:

1. Why the hell would you ever do that? The existing operators are
perfectly clear.
2. Many people come up with crazy macros like that that end up doing
something non-obvious. So you're forcing readers to go look.
(I also have 'and' defined as
'&&' and 'or' defined as '||', amongst many other things, as I think it
cleans up ugly C syntax, especially with syntax-highlighting editors).

What this does is make your code worthless to everyone else. Don't try to
"clean up" a language you don't like -- either write in the language, or
write in a different language.

To put it in perspective, consider that the International Obfuscated C
Code Contest has its roots in this kind of thing.
Yes I do. I used to have it as just n, but I like the other semantics
much better. Maybe if it bites me hard enough in the future I'll rethink
that decision.

Well, it's a moot point with the other macro silliness, but this would be
another thing that would keep people from using your code.

C counts from zero. Any time you try to cover this up, you are creating
a large number of potential off-by-one errors.

-s
 

Ian Collins

I use these kinds of things for uintX:

typedef unsigned __int32 uint32; // I may be too spoiled having a C++ compiler

This has nothing to do with C++. Why don't you just use the standard
typedefs?
 

Ian Collins

OK, typedefs I am fine with. I have the uintX things macro'd to be the
platform things such as unsigned __int32.

Why use macros at all?
// From prior header file
//
#define ne !=
#define eq ==
Pointless.


I have the following kinds of inlines (C++), or else the compiler complains:

Inline bool32 chkbit64(uint64 x, uint32 n){ return ((x & bitn64(n)) ne 0); }

That's neither C, nor C++. Unless you are using more silly macros. In
C++ there is no need to use macros for bit manipulation.
Yes I do.

Where?
 

Eric Sosman

Barry said:
[...]
And what happens if you want to test the sign bit? You might avoid
syntax errors and undefined behavior with
#define bit(n) (1u << (n))

While I don't see the need, some might even want 1lu or 1llu.

OK on 1u. But the compiler wouldn't really change to signed if an
unsigned was passed (guaranteed with use of the bitsetX defines below),
would it?

You're *still* not seeing the problem, are you? All right,
let's go through it step by step:

- What is the type of `1'?
- How many non-padding bits are in the type of `1' (on any given implementation, obviously, since C does not specify a maximum for this quantity)?
- Call that bit count B. What happens if `n >= B'?
- Returning to the type of `1': Is it a signed or an unsigned type?
- If it happens to be a signed type, what happens if `n >= B-1'?
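
Concretely, on an implementation where int is 32 bits (so B == 32):

1u << 31 // well-defined: unsigned, yields 0x80000000
1 << 31  // undefined: 1 * 2^31 is not representable in int
1 << 32  // undefined: shift count >= the width of the type

so bitn(31) already goes wrong on such a system, and bitn(32) certainly does.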
I use these kinds of things for uintX:

typedef unsigned __int32 uint32; // I may be too spoiled having a C++ compiler


I define it in a prior header file with the obvious syntax.

"Beautiful new impediments to understanding," as the annotations
to the Ten Commandments put it.
It's another define I use that makes the code more readable and less
error-prone.

BNITU.
 

Ben Pfaff

Seebs said:
Never, EVER, do that.

If you really want to do that, just #include <iso646.h>:

7.9 Alternative spellings <iso646.h>

The header <iso646.h> defines the following eleven macros (on
the left) that expand to the corresponding tokens (on the
right):

and &&
and_eq &=
bitand &
bitor |
compl ~
not !
not_eq !=
or ||
or_eq |=
xor ^
xor_eq ^=
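
So, for example, this is strictly conforming C (C95 and later):

#include <iso646.h>

int both_nonzero(int x, int y)
{
    return x not_eq 0 and y not_eq 0; // same as: x != 0 && y != 0
}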
 

Ben Pfaff

Brian said:
// Gotta love "close to the metal"!
//
#define chkbit(x, n) (((x) & bitn(n)) ne 0)
#define setbit(x, n) ((x) |= bitn(n))
#define clrbit(x, n) ((x) &= ~bitn(n))
#define tglbit(x, n) ((x) ^= bitn(n))

What's the benefit of these macros? How is
setbit(x, n)
easier to read or to write than
x |= 1 << n;
 

Eric Sosman

The bitn macro is for use by the other macros. The other macros take
"arguments" of type bitsetX. Yes, the programmer must know how wide the
bitset variable is.

With the definition given, and even if `n' is a single identifier
or constant, the programmer's knowledge helps not one whit. Damage
already done, the moving finger writes, try to do it better yesterday.
 

Brian

Seebs said:
Never, EVER, do that.


One can make a reasonable guess, but:

1. Why the hell would you ever do that? The existing operators are
perfectly clear.

The macros are clearer and less error-prone.
2. Many people come up with crazy macros like that that end up doing
something non-obvious. So you're forcing readers to go look.

Anyone who couldn't figure out from the context what those macros meant
need not be doing any coding IMO.
What this does is make your code worthless to everyone else.

Good thing my code is not for anyone else huh.
Don't
try to "clean up" a language you don't like -- either write in the
language, or write in a different language.

Make me. :p Anything that cleans up ugly C syntax is a boon. I'll go
further and say that those who don't create things to abstract away the low
level are not very good coders, or are JUST coders. Implementing everything
at level 0 is not a good design for any software. C is just a hammer. If you
want more tools, then you have to create them. The couple of macros above
are just tidbits.
To put it in perspective, consider that the International Obfuscated C
Code Contest has its roots in this kind of thing.

C is inherently obfuscated ("cryptic"). Keywords would be better than the
operators and with the macros, I have the equivalent. The macros above
are a teeny tiny step to clean up the language. C++ actually defines
macros in the standard such as those, I believe, and like I said, in
these modern times of syntax-highlighting editors, those macros really
come into their own. Try it, your code will look much nicer, be more
readable in that IDE and be less error-prone too. :)
Well, it's a moot point with the other macro silliness, but this
would be another thing that would keep people from using your code.

C counts from zero. Any time you try to cover this up, you are
creating a large number of potential off-by-one errors.

Not if you abstract it away properly.
 

Brian

Ben said:
If you really want to do that, just #include <iso646.h>:
[...]

OK, I thought it was a C++ thing and that's where I got the idea. I just
extended it and use my own header as I try to avoid standard headers as
much as possible and stick with the language proper.
 
