pete said:
printf doesn't really have any features to facilitate that
kind of output.
/* BEGIN new.c */
#include <limits.h>
#include <stdio.h>
#include <assert.h>
#define BITS 4
#define STRING ("%3u = 0x%-2x = %s\n")
#define E_TYPE unsigned char
#define P_TYPE unsigned
#define INITIAL 0
#define FINAL (((1u << BITS - 1) << 1) - 1)
#define OFFSET (sizeof(e_type) * CHAR_BIT - BITS)
#define INC(E) (++(E))
I don't see what all these macros are supposed to accomplish. They seem
intended to abstract over the integer type, but that is not the core of
the problem.
typedef E_TYPE e_type;
typedef P_TYPE p_type;
Seems redundant.
void bitstr(char *str, const void *obj, size_t n);
int main(void)
{
    e_type e;
    char ebits[CHAR_BIT * sizeof e + 1];

    assert(CHAR_BIT >= BITS && BITS >= 1);
If BITS cannot exceed CHAR_BIT, e_type cannot be anything but a char type.
You probably meant CHAR_BIT * sizeof e >= BITS.
puts("\n/* BEGIN output from new.c */\n");
e = INITIAL;
bitstr(ebits, &e, sizeof e);
printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
while (FINAL > e) {
INC(e);
bitstr(ebits, &e, sizeof e);
printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
}
puts("\n/* END output from new.c */");
return 0;
}
void bitstr(char *str, const void *obj, size_t n)
{
    unsigned mask;
    const unsigned char *byte = obj;

    while (n-- != 0) {
        mask = ((unsigned char)-1 >> 1) + 1;
        do {
            *str++ = (char)(mask & byte[n] ? '1' : '0');
            mask >>= 1;
        } while (mask != 0);
    }
    *str = '\0';
}
/* END new.c */
bitstr() could be used to determine the platform representation of integers;
the whole main() thing looks like fluff.
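For instance, a minimal main() around it (a sketch; the output depends on
the platform's int width and byte order, which is the point):

#include <limits.h>
#include <stdio.h>

void bitstr(char *str, const void *obj, size_t n); /* as above */

int main(void)
{
    int i = 1;
    char bits[CHAR_BIT * sizeof i + 1];

    bitstr(bits, &i, sizeof i);
    puts(bits); /* where the lone 1 bit lands reveals the byte order */
    return 0;
}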
With due respect to your approach, I suggest this as more straightforward:
#include <stdio.h>
#include <limits.h>
#include <math.h>
#include <assert.h>
#define MAXBITS(t) (CHAR_BIT * sizeof(t))
/* Write an 'ndigits'-long binary representation of 'i' at 'buf' */
void bitseq(unsigned int i, char buf[], int ndigits) {
    unsigned int mask = 1;

    assert(ndigits >= 0); /* Or any other sensible error handling */
    while (ndigits != 0) {
        buf[--ndigits] = i & mask ? '1' : '0';
        mask <<= 1;
    }
}
/* Write an 'ndigits'-long binary representation of 'i' at 'buf' and
   terminate with a NUL; note that the buffer must be at least
   ndigits + 1 bytes long */
void bitstr(unsigned int i, char buf[], int ndigits) {
    bitseq(i, buf, ndigits);
    buf[ndigits] = '\0';
}
int binary_digits(unsigned int n) {
    return n == 0 ? 1 : (int) log2((double) n) + 1;
    /* 'log2' only exists in C99 or as an extension. In C89, the
       following could be used instead:
       return n == 0 ? 1 : (int) (log((double) n) / log(2.0)) + 1;
       If floating point is slow or unavailable, integer-based methods
       exist; one is sketched below. */
}
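/* One integer-based variant is a plain shift loop (binary_digits_int is
   a made-up name; faster bit tricks exist but are beside the point): */
int binary_digits_int(unsigned int n) {
    int digits = 1; /* even 0 needs one digit */

    while (n >>= 1) /* one extra digit per bit above the lowest */
        digits++;
    return digits;
}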
int main(void) {
    unsigned int x = 5;
    char s[MAXBITS(x) + 1]; /* + 1 for terminating NUL */

    /* Print padded binary representation */
    bitstr(x, s, (int) (sizeof s - 1));
    printf("%u is %s\n", x, s);

    /* Print minimal binary representation */
    bitstr(x, s, binary_digits(x));
    printf("%u is %s\n", x, s);

    /* Print two digits: the value needs more, so it is truncated to
       its two least significant bits */
    bitstr(x, s, 2);
    printf("%u is not %s\n", x, s);
    return 0;
}
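On a platform with a 32-bit unsigned int, this prints:

5 is 00000000000000000000000000000101
5 is 101
5 is not 01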
If "unsigned int" is not good enough as a common denominator, another type
can be used, or even uintmax_t for the largest integer type available (this
exists only in C99). A fully generic approach was given by pete's code
above, but this may be overkill, and furthermore requires that you know the
size in bits of integers you wish to print, and what type is large enough to
hold them.
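For instance, a C99 sketch of bitseq() widened to uintmax_t (bitseq_max is
just a made-up name for this variant; bitstr() and the digit count would
be widened the same way):

#include <stdint.h>

void bitseq_max(uintmax_t i, char buf[], int ndigits) {
    uintmax_t mask = 1;

    while (ndigits != 0) {
        buf[--ndigits] = i & mask ? '1' : '0';
        mask <<= 1;
    }
}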
Signed integers cannot be printed directly with this method. If the
platform's representation is desired, use pete's bitstr(). If a specific
representation is desired, convert the integer to unsigned in the
appropriate manner; for two's complement, it is sufficient to cast the
signed integer to unsigned.
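For example, reusing the bitstr() above (on two's complement hardware,
-5 shown in eight digits comes out as 11111011):

int v = -5;
char t[8 + 1];

/* The conversion to unsigned is well defined in C (reduction modulo
   UINT_MAX + 1); on two's complement platforms it preserves the bit
   pattern. */
bitstr((unsigned int) v, t, 8);
printf("%d is %s\n", v, t);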
Other approaches are certainly possible depending on what is needed: the
buffer could be dynamically allocated, or the function could count how many
digits are actually written or would have been written; a rough sketch of
that counting idea follows. This is the most straightforward approach I can
think of, though.
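Sketch of the counting variant (bitstr_n is a made-up name; it builds on
binary_digits() and bitstr() above, fills the buffer only when there is
room, and always reports how many digits the value needs):

int bitstr_n(unsigned int i, char buf[], int bufsize) {
    int needed = binary_digits(i);

    if (bufsize >= needed + 1) /* digits plus terminating NUL */
        bitstr(i, buf, needed);
    return needed;
}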
S.