Johnathan Doe
Hi,
I am having problems understanding the bitwise idioms that I see frequently
in source code. I understand that they work at the individual bit level
and that 0 | 1 = 1, 1 & 1 = 1 and so on (I understand what all the
operators mean), but when I try to write bit-banging code it inevitably
fails to produce the result I want.
I am trying to put together a packet to query a DNS server. I
understand there are potential little- vs. big-endian problems, but
TCP/IP Illustrated shows where the least and most significant bits are
in the packets, so I think the serial number is the only field where I
have to worry about little vs. big endian... presumably it would go out
in network (big-endian) order...
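For what it is worth, this is how I currently plan to write out a 16-bit
field such as that serial number, using htons() (the variable names here
are just placeholders of mine):
#include <arpa/inet.h>   /* for htons() */

unsigned short query_id = 0x1234;             /* arbitrary ID I chose */
unsigned short id_on_wire = htons(query_id);  /* host order -> network (big-endian) order */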
But now to the bitwise stuff. Say I had a 16-bit short int, as in the
DNS header flags field, with the most significant bit meaning query or
answer (0 = query, 1 = answer). Would this be the correct way to get a mask:
#define QR_MASK (1 << 16)
I think this means "move a 1 sixteen bit positions to the left..." Presumably
the 1 is moving to the most significant (big) end of the 16-bit word,
and the rest of the word is filled with 0 bits. Or does it fall off the
end? (Am I looking for 1 << 15 or 1 << 16? I tried both, but there was
no output...)
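For reference, these are the two candidate definitions I tried (the _A
and _B suffixes are just my labels for the two attempts, and I am
assuming unsigned short is 16 bits on my machine):
#define QR_MASK_A (1 << 15)   /* highest bit of a 16-bit word? */
#define QR_MASK_B (1 << 16)   /* or does this one fall off the end? */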
Working on a word in the program, how would a hypothetical DNS server turn
that bit on without disturbing the other bits? Like this?
struct dns_struct {
    unsigned short int flags;
    ...
};

struct dns_struct d;
d.flags |= QR_MASK;
Is that the idiom normally used? Is a mask used just to set and clear
bits, or is it used in other contexts as well?
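My guess at the related idioms, which I would like confirmed, is
something like this (using the QR_MASK from above):
d.flags |= QR_MASK;     /* set the bit */
d.flags &= ~QR_MASK;    /* clear the bit (AND with the complement of the mask) */
if (d.flags & QR_MASK)  /* test the bit */
    puts("QR bit is set");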
How can I write a routine that will print a word in binary? I have
given it a go with the short program below. I am aiming for the output:
QR_BIT is set
OPCODE_QRY is set
1000100000000000
This program is not working as expected (ignore OPCODE for the moment;
it is supposed to be a mask covering the 4 bits starting at bit 12):
#include <stdio.h>
#include <stdlib.h>

#define QR_BIT     (1 << 16)
#define OPCODE     (15 << 12)
#define OPCODE_QRY (1 << 12)

void print_bin(unsigned short int n, int newline);

int main()
{
    unsigned short flags = 0;

    flags |= QR_BIT;
    flags |= OPCODE_QRY;

    if (flags & QR_BIT)
        puts("QR_BIT is set");
    else
        puts("QR_BIT is not set");

    if (flags & OPCODE_QRY)
        puts("OPCODE_QRY is set");
    else
        puts("OPCODE_QRY is not set");

    printf("flags = ");
    print_bin(flags, 1);
}

void print_bin(unsigned short int n, int newline)
{
    int i;

    for (i = 16; i > 0; i--)
        if ((1 << i) & n)
            putchar('1');
        else
            putchar('0');

    if (newline)
        putchar('\n');
}
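And on the multi-bit field point, this is roughly what I imagine doing to
put an opcode value into its field and read it back out, though I am not
sure the shift amount is right (OPCODE_SHIFT and the value 12 are just my
guess based on my defines above, and the snippet is only a fragment):
#define OPCODE_SHIFT 12
#define OPCODE_MASK  (15 << OPCODE_SHIFT)   /* a 4-bit-wide field */

unsigned short flags2 = 0;
unsigned short opcode = 1;                  /* whatever value belongs in the field */

flags2 &= ~OPCODE_MASK;                     /* clear the field first */
flags2 |= (opcode & 15) << OPCODE_SHIFT;    /* then drop the new value in */

opcode = (flags2 & OPCODE_MASK) >> OPCODE_SHIFT;   /* and read it back out */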
So, to summarise what I am asking:
* In what contexts are masks used, and how do I construct them?
* How do I set and clear individual bits?
* How do I deal with bit fields of more than one bit?
Thanks for your help.
Johnathan