newbie: binary representation

TK

Hi,

how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

Thanks.

o-o

Thomas
 
pete

TK said:
Hi,

how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

printf doesn't really have any features to facilitate that
kind of output.

/* BEGIN new.c */

#include <limits.h>
#include <stdio.h>
#include <assert.h>

#define BITS 4
#define STRING ("%3u = 0x%-2x = %s\n")
#define E_TYPE unsigned char
#define P_TYPE unsigned
#define INITIAL 0
#define FINAL (((1u << BITS - 1) << 1) - 1)
#define OFFSET (sizeof(e_type) * CHAR_BIT - BITS)
#define INC(E) (++(E))

typedef E_TYPE e_type;
typedef P_TYPE p_type;

void bitstr(char *str, const void *obj, size_t n);

int main(void)
{
    e_type e;
    char ebits[CHAR_BIT * sizeof e + 1];

    assert(CHAR_BIT >= BITS && BITS >= 1);
    puts("\n/* BEGIN output from new.c */\n");
    e = INITIAL;
    bitstr(ebits, &e, sizeof e);
    printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
    while (FINAL > e) {
        INC(e);
        bitstr(ebits, &e, sizeof e);
        printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
    }
    puts("\n/* END output from new.c */");
    return 0;
}

void bitstr(char *str, const void *obj, size_t n)
{
    unsigned mask;
    const unsigned char *byte = obj;

    while (n-- != 0) {
        mask = ((unsigned char)-1 >> 1) + 1;
        do {
            *str++ = (char)(mask & byte[n] ? '1' : '0');
            mask >>= 1;
        } while (mask != 0);
    }
    *str = '\0';
}

/* END new.c */
 
Malcolm

TK said:
Hi,

how can I convert an integer (e.g. 5) into a binary representation (0101)?
With printf()?

Write a function

void itob(char *ret, int x)

(Make sure ret is a buffer big enough to hold the biggest possible binary
number).

To get the last digit, AND with 1. If the result is non-zero, the digit is 1;
else it is 0.
Then shift x right by one place, and repeat to get the next digit.
Continue until x is zero.
(Make sure that if x is exactly zero you get at least 1 zero, and also
decide what to do about negative numbers).
Then reverse the digits you have. The easiest way is to put them into a
local temporary buffer as you work them out, and then copy to the caller's
buffer.
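
A minimal sketch of such a function (one possible implementation; it assumes
x is non-negative, since right-shifting a negative int is
implementation-defined):

#include <limits.h>

void itob(char *ret, int x)
{
    char tmp[CHAR_BIT * sizeof x];       /* digits, least significant first */
    int n = 0;

    do {
        tmp[n++] = (x & 1) ? '1' : '0';  /* last digit: AND with 1 */
        x >>= 1;                         /* shift to expose the next digit */
    } while (x != 0);                    /* at least one digit, even for 0 */
    while (n != 0)                       /* copy back, reversing the order */
        *ret++ = tmp[--n];
    *ret = '\0';
}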
 
Keith Thompson

Malcolm said:
Write a function

void itob(char *ret, int x)

(Make sure ret is a buffer big enough to hold the biggest possible binary
number).

To get the last digit, AND with 1. If the result is non-zero, the digit is 1;
else it is 0.
Then shift x right by one place, and repeat to get the next digit.
Continue until x is zero.
(Make sure that if x is exactly zero you get at least 1 zero, and also
decide what to do about negative numbers).
Then reverse the digits you have. The easiest way is to put them into a
local temporary buffer as you work them out, and then copy to the caller's
buffer.

In addition to that, either decide how to handle negative values or
make the second argument unsigned.
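
For instance, changing the itob() sketched above to

void itob(char *ret, unsigned int x)

makes every shift well defined, and a call such as itob(buf, -5) then
prints the two's complement pattern of -5 (the value converts to
UINT_MAX - 4).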
 
August Karlstrom

TK said:
how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

$ cat test.c
#include <stdio.h>

void print_bits(int n, int digits)
{
    int k;
    int bitCount = 8 * sizeof n;
    int upper = (digits < bitCount) ? digits - 1 : bitCount - 1;

    for (k = upper; k >= 0; k--) {
        printf("%i", (n >> k) & 0x1);
    }
}

int main(void)
{
    print_bits(5, 4); puts("");
    print_bits(5, 8); puts("");
    return 0;
}

$ ./test
0101
00000101


August
 
Skarmander

pete said:
printf doesn't really have any features to facilitate that
kind of output.

/* BEGIN new.c */

#include <limits.h>
#include <stdio.h>
#include <assert.h>

#define BITS 4
#define STRING ("%3u = 0x%-2x = %s\n")
#define E_TYPE unsigned char
#define P_TYPE unsigned
#define INITIAL 0
#define FINAL (((1u << BITS - 1) << 1) - 1)
#define OFFSET (sizeof(e_type) * CHAR_BIT - BITS)
#define INC(E) (++(E))
I don't see what all these macros are supposed to accomplish. The intent
here could be to support abstract integer types through macros, but this is
not the core of the problem.
typedef E_TYPE e_type;
typedef P_TYPE p_type;
Seems redundant.
void bitstr(char *str, const void *obj, size_t n);

int main(void)
{
    e_type e;
    char ebits[CHAR_BIT * sizeof e + 1];

    assert(CHAR_BIT >= BITS && BITS >= 1);

If BITS cannot exceed CHAR_BIT, e_type cannot be anything but a char type.
You probably meant CHAR_BIT * sizeof e.
    puts("\n/* BEGIN output from new.c */\n");
    e = INITIAL;
    bitstr(ebits, &e, sizeof e);
    printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
    while (FINAL > e) {
        INC(e);
        bitstr(ebits, &e, sizeof e);
        printf(STRING, (p_type)e, (p_type)e, OFFSET + ebits);
    }
    puts("\n/* END output from new.c */");
    return 0;
}

void bitstr(char *str, const void *obj, size_t n)
{
    unsigned mask;
    const unsigned char *byte = obj;

    while (n-- != 0) {
        mask = ((unsigned char)-1 >> 1) + 1;
        do {
            *str++ = (char)(mask & byte[n] ? '1' : '0');
            mask >>= 1;
        } while (mask != 0);
    }
    *str = '\0';
}

/* END new.c */

bitstr() could be used to determine the platform representation of integers;
the whole main() thing looks like fluff.

With due respect to your approach, I suggest this as more straightforward:

#include <stdio.h>
#include <limits.h>
#include <math.h>
#include <assert.h>

#define MAXBITS(t) (CHAR_BIT * sizeof(t))

/* Write an 'ndigits'-long binary representation of 'i' at 'buf' */
void bitseq(unsigned int i, char buf[], int ndigits) {
    unsigned int mask = 1;

    assert(ndigits >= 0); /* Or any other sensible error handling */
    while (ndigits != 0) {
        buf[--ndigits] = i & mask ? '1' : '0';
        mask <<= 1;
    }
}

/* Write an 'ndigits'-long binary representation of 'i' at 'buf' and
   terminate with a NUL; note that the buffer must be at least
   ndigits + 1 bytes long */
void bitstr(unsigned int i, char buf[], int ndigits) {
    bitseq(i, buf, ndigits);
    buf[ndigits] = '\0';
}

int binary_digits(unsigned int n) {
    return n == 0 ? 1 : (int) log2((double) n) + 1;
    /* 'log2' only exists in C99 or as an extension. In C89, the
       following could be used instead:

       return n == 0 ? 1 : (int) (log((double) n) / log(2.0)) + 1;

       If floating point is slow or unavailable, integer-based methods
       exist; one sketch follows this listing. */
}

int main(void) {
    unsigned int x = 5;
    char s[MAXBITS(x) + 1]; /* + 1 for terminating NUL */

    /* Print padded binary representation */
    bitstr(x, s, sizeof s - 1);
    printf("%u is %s\n", x, s);

    /* Print minimal binary representation */
    bitstr(x, s, binary_digits(x));
    printf("%u is %s\n", x, s);

    /* Print two digits: result too long, truncated to the least
       significant bits */
    bitstr(x, s, 2);
    printf("%u is not %s\n", x, s);

    return 0;
}
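
As the comment in binary_digits() notes, integer-only methods exist; one
sketch simply counts how many shifts it takes to empty the number:

int binary_digits(unsigned int n) {
    int d = 1;

    while (n >>= 1) {  /* one more digit per remaining shift */
        d++;
    }
    return d;
}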

If "unsigned int" is not good enough as a common denominator, another type
can be used, or even uintmax_t for the largest integer type available (this
exists only in C99). A fully generic approach was given by pete's code
above, but this may be overkill, and furthermore requires that you know the
size in bits of integers you wish to print, and what type is large enough to
hold them.
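
For example, a C99 sketch over the widest type (the name bitseq_max is
made up here):

#include <stdint.h>

void bitseq_max(uintmax_t i, char buf[], int ndigits) {
    uintmax_t mask = 1;

    while (ndigits != 0) {
        buf[--ndigits] = i & mask ? '1' : '0';
        mask <<= 1;
    }
}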

Signed integers cannot be printed directly with this method. If the
representation of the platform is desired, use pete's bitstr(). If a
specific representation is desired, convert the integer to unsigned in the
appropriate manner. For two's complement, it's sufficient to simply cast the
signed integer to unsigned.
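
For example, with the bitstr() above:

int v = -5;
char t[MAXBITS(v) + 1];

bitstr((unsigned int) v, t, sizeof t - 1);
printf("%d is %s\n", v, t);  /* prints the two's complement pattern */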

Other approaches are certainly possible depending on what is needed: the
buffer could be dynamically allocated and the function could count how many
digits are actually written or would have been written. This is the most
straightforward approach I can think of, though.

S.
 
pete

Skarmander said:
I don't see what all these macros are supposed to accomplish.

The OP's example shows a 4-bit representation.
The program as posted outputs 4-bit representations.

/* BEGIN output from new.c */

  0 = 0x0  = 0000
  1 = 0x1  = 0001
  2 = 0x2  = 0010
  3 = 0x3  = 0011
  4 = 0x4  = 0100
  5 = 0x5  = 0101
  6 = 0x6  = 0110
  7 = 0x7  = 0111
  8 = 0x8  = 1000
  9 = 0x9  = 1001
 10 = 0xa  = 1010
 11 = 0xb  = 1011
 12 = 0xc  = 1100
 13 = 0xd  = 1101
 14 = 0xe  = 1110
 15 = 0xf  = 1111

/* END output from new.c */

The value of the BITS macro can be varied to anything
from CHAR_BIT down to 1 inclusive.
The other macros are just fluff.
bitstr is an old function that I had.
 
pete

pete said:
The OP's example shows a 4-bit representation.
The program as posted outputs 4-bit representations.
The value of the BITS macro can be varied to anything
from CHAR_BIT down to 1 inclusive.
The other macros are just fluff.
bitstr is an old function that I had.

When I started writing the program,
I had originally considered
that it might output the binary representations
for types with more than one byte,
but I didn't feel like resolving endian issues.
 
Emmanuel Delahaye

TK said:
how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

You can, but not directly. You must write your own numeric-to-binary
text function.
 
John Smith

TK said:
Hi,

how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

Thanks.

o-o

Thomas

The code in the loop implements a generalized algorithm for converting a
base-10 input to any base:

#include <stdio.h>
#include <stdlib.h>
#define SIZE 100

int main(int argc, char *argv[])
{
    unsigned long long n, digit, base, pv;
    unsigned long long nn[SIZE] = {0};
    int index, idx;

    if (argc != 3)
    {
        fprintf(stderr, "usage: pgm <n> <base>\n");
        exit(EXIT_FAILURE);
    }
    n = strtoull(argv[1], NULL, 10);
    base = strtoull(argv[2], NULL, 10);
    pv = 1;
    index = SIZE - 1;

    printf("\nbig endian base %llu\n", base);
    printf("representation of %llu:\n\n", n);
    do {
        digit = n % base;
        /* the subtraction step is included to document the
           algorithm, but is unnecessary with integer division */
        n -= digit;
        n /= base;
        nn[index] = digit;
        pv *= base;
        index--;
    } while (n != 0);

    /* display in big endian format */
    for (idx = index + 1; idx < SIZE; idx++)
    {
        printf(" %4llu", nn[idx]);
    }
    putchar('\n');

    return 0;
}
 
John Smith

John said:
TK said:
Hi,

how can I convert an integer (e.g. 5) into a binary representation
(0101)? With printf()?

Thanks.

o-o

Thomas


The code in the loop implements a generalized algorithm for converting a
base-10 input to any base:

#include <stdio.h>
#include <stdlib.h>
#define SIZE 100

int main(int argc, char *argv[])
{
    unsigned long long n, digit, base, pv;
    unsigned long long nn[SIZE] = {0};
    int index, idx;

    if (argc != 3)
    {
        fprintf(stderr, "usage: pgm <n> <base>\n");
        exit(EXIT_FAILURE);
    }
    n = strtoull(argv[1], NULL, 10);
    base = strtoull(argv[2], NULL, 10);
    pv = 1;
    index = SIZE - 1;

    printf("\nbig endian base %llu\n", base);
    printf("representation of %llu:\n\n", n);
    do {
        digit = n % base;
        /* the subtraction step is included to document the
           algorithm, but is unnecessary with integer division */
        n -= digit;
        n /= base;
        nn[index] = digit;
        pv *= base;
        index--;
    } while (n != 0);

    /* display in big endian format */
    for (idx = index + 1; idx < SIZE; idx++)
    {
        printf(" %4llu", nn[idx]);
    }
    putchar('\n');

    return 0;
}

Omit the line "pv *= base;". It was intended to check the result;
it's not part of the algorithm.

JS
 
