Bit field pointers?

mazzawi

Is it possible to have pointers to bit fields?
I read somewhere that you can't.

But what if we do this:

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

u_int3_t Array[100];

Will this give me an array of bitfields?

Or a double array:
u_int3_t Array[100][100];

I'm designing a big struct that is going to end up with 1000's of bit
fields. I need them to be in arrays or organized somehow, otherwise it
will be a @#$%$. Space is at a premium, so I have to save as many bits
as possible.

Thanks,
 
Arthur J. O'Dwyer

Is it possible to have pointers to bit fields?
I read somewhere that you can't.

Right. You can't.
But what if we do this:

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

u_int3_t Array[100];

Will this give me an array of bitfields?

No, of course not. It will give you an array of 100 objects of type
'u_int3_t', whatever that is. If you added the line

typedef struct u_int3_t u_int3_t;

to your program, above that declaration, then you'd have an array of
100 objects of type 'struct u_int3_t', and each of those objects would
contain a bitfield; but that's not really the same thing as an "array
of bitfields."
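
In other words, a minimal sketch of the fixed declarations:

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

typedef struct u_int3_t u_int3_t; /* now 'u_int3_t' names the type */

u_int3_t Array[100]; /* 100 structs, each holding one 3-bit field */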
I'm designing a big struct that is going to end up with 1000's of bit
fields. I need them to be in arrays or organized somehow, otherwise it
will be a @#$%$. Space is at a premium, so I have to save as many bits
as possible.

Read your compiler's documentation to see whether it provides
useful non-standard extensions such as __attribute__((packed)) or
a __packed keyword. Standard C provides no such attribute, though.

With most compilers, your best bet will be something like

/* Hope this takes exactly 24 bits! */
__packed struct u_8int3_t { /* __packed is not standard */
    int f0: 3;
    int f1: 3;
    int f2: 3;
    int f3: 3;
    int f4: 3;
    int f5: 3;
    int f6: 3;
    int f7: 3;
};

struct u_8int3_t arr[13]; /* 13 >= 100/8 */
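
Note that you can't index bitfield members at run time, so treating
'arr' as 100 numbered 3-bit values takes a helper along these lines
(a sketch, assuming the struct above):

/* Fetch 3-bit field number k (0 <= k < 100) from the packed array. */
int get3(const struct u_8int3_t *a, int k)
{
    const struct u_8int3_t *p = &a[k / 8];
    switch (k % 8) {
    case 0: return p->f0;  case 1: return p->f1;
    case 2: return p->f2;  case 3: return p->f3;
    case 4: return p->f4;  case 5: return p->f5;
    case 6: return p->f6;  default: return p->f7;
    }
}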

However, this whole thing smells of premature optimization. You
should write the code in the most natural way first, and then test
it to see whether you really /need/ to go through all this pain.

HTH,
-Arthur
 
jaysome

Is it possible to have pointers to bit fields?
I read somewhere that you can't.

But what if we do this:

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

u_int3_t Array[100];

That should not compile. Change it to:

struct u_int3_t Array[100];

or:

CMI Array[100];

Here is an excellent URL that you should read:

http://web.torek.net/torek/c/types2.html
Will this give me an array of bitfields?

Yes. But it may not be what you expected.
I'm designing a big struct that is going to end up with 1000's of bit
fields. I need them to be in arrays or organized somehow, otherwise it
will be a @#$%$. Space is at a premium, so I have to save as many bits
as possible.

What exactly are you trying to accomplish? I think we need further
information to really help you.
 
Nick Keighley

Is it possible to have pointers to bit fields?
I read somewhere that you can't.

But what if we do this:

typedef struct u_int3_t {
    unsigned int u_int3_t:3;
} CMI;

u_int3_t Array[100];

Will this give me an array of bitfields?

Or a double array:
u_int3_t Array[100][100];

I'm designing a big struct that is going to end up with 1000's of bit
fields. I need them to be in arrays or organized somehow, otherwise it
will be a @#$%$. Space is at a premium, so I have to save as many bits
as possible.

Abstraction. Write a library (API) that implements an array of
bitfields.

void set_field (unsigned value, int index);
unsigned get_field (int index);

The implementation details can be hidden from the rest of your
application. The library can be optimised (IF NECESSARY) independently
of the app.

Say all the bit fields are of length 3. Then multiply the index by 3
to get the bit position, and divide by 8 to find the appropriate byte
and offset (a sketch follows). Can your bitfields cross byte
boundaries? Your choice. If no, it's faster; if yes, it's smaller.
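
A minimal sketch of such a library, assuming 3-bit fields that may
cross byte boundaries (NFIELDS and the backing array are made up for
illustration):

#include <limits.h>

#define NFIELDS 10000 /* made-up count */
static unsigned char bits[(NFIELDS * 3 + CHAR_BIT - 1) / CHAR_BIT];

void set_field(unsigned value, int index)
{
    int i;
    for (i = 0; i < 3; i++) { /* bit by bit, so fields may span bytes */
        int b = index * 3 + i;
        if (value & (1u << i))
            bits[b / CHAR_BIT] |= (unsigned char)(1u << (b % CHAR_BIT));
        else
            bits[b / CHAR_BIT] &= (unsigned char)~(1u << (b % CHAR_BIT));
    }
}

unsigned get_field(int index)
{
    unsigned v = 0;
    int i;
    for (i = 0; i < 3; i++) {
        int b = index * 3 + i;
        if (bits[b / CHAR_BIT] & (1u << (b % CHAR_BIT)))
            v |= 1u << i;
    }
    return v;
}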

BTW, if you use a whole byte for each field you are only using a few
K... What's this running in, a washing machine controller?


--
Nick Keighley

Programming should never be boring, because anything
mundane and repetitive should be done by the computer.
~Alan Turing
 
mazzawi

Thank you for the advice.

I think I should forget about the struct with arrays of structs with
bitfields (what I was trying to do before), and go with one massive
struct of bitfields, because they will pack even closer, even though it
will be ugly.

I'm using gcc 3.2.3 on RHES3 to compile the code. It doesn't need to be
portable to any other OS or arch, so I will go with
__attribute__((__packed__)).
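
Something like this is what I have in mind (field names are made up):

/* gcc-specific: remove all padding between members */
struct record {
    unsigned int flag_a:1;
    unsigned int state_b:3;
    unsigned int count_c:5;
    /* ... thousands more ... */
} __attribute__((__packed__));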

This isn't going in a washing machine microcontroller, but it needs to
pack tightly because we're looking at storing 500,000,000 of those
little monsters in a distributed Berkeley DB,
so 1 bit = 60 MB and the entire db is now looking like 5 TB.

Having one giant struct with bitfields seems like the most efficient
way because it can pack everything down to the bit.

I'm still weary of using __attribute__((__packed__)) on the struct.
Will I be able to access/update the struct's fields like normal as long
as I don't put too large a number in there?
 
Keith Thompson

I'm still weary of using __attribute__((__packed__)) on the struct.
Will I be able to access/update the struct's fields like normal as long
as I don't put too large a number in there?

(I assume you mean "wary", not "weary".)

gnu.gcc.help is a good place to ask about gcc-specific extensions.
 
Nick Keighley

I think I should forget about the struct with arrays of structs with
bitfields (what I was trying to do before), and go with one massive
struct of bitfields, because they will pack even closer, even though it
will be ugly.

I'm using gcc 3.2.3 on RHES3 to compile the code. It doesn't need to be
portable to any other OS or arch, so I will go with
__attribute__((__packed__)).

But one day you *will* need to port it...

This isn't going in a washing machine microcontroller, but it needs to
pack tightly because we're looking at storing 500,000,000 of those
little monsters in a distributed Berkeley DB,
so 1 bit = 60 MB and the entire db is now looking like 5 TB.

So why did you say thousands when you meant hundreds of millions?
That's *5* orders of magnitude difference! I tended not to get good
marks on my physics assignments when I was that far out...
Having one giant struct with bitfields seems like the most efficient
way because it can pack everything down to the bit.

I'm still weary of using __attribute__((__packed__)) on the struct.
Will I be able to access/update the struct's fields like normal as long
as I don't put too large a number in there?

Why not write code to pack and unpack the items yourself? The compiler
is going to have to generate pretty similar code to access the bit
fields anyway.
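
For instance, a sketch that packs three made-up fields (1, 3 and 5
bits) into a single value by hand:

/* Field widths and positions are illustrative only. */
unsigned long pack_record(unsigned flag, unsigned state, unsigned count)
{
    return (unsigned long)(flag  & 0x01)
         | (unsigned long)(state & 0x07) << 1
         | (unsigned long)(count & 0x1f) << 4;
}

void unpack_record(unsigned long r,
                   unsigned *flag, unsigned *state, unsigned *count)
{
    *flag  =  r       & 0x01;
    *state = (r >> 1) & 0x07;
    *count = (r >> 4) & 0x1f;
}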


--
Nick Keighley

We recommend, rather, that users take advantage of the extensions of
GNU C and disregard the limitations of other compilers. Aside from
certain supercomputers and obsolete small machines, there is less
and less reason ever to use any other C compiler other than for
bootstrapping GNU CC.
(Using and Porting GNU CC)
 
mazzawi

So why did you say thousands when you meant hundreds of millions?
That's *5* orders of magnitude difference! I tended not to get good
marks on my physics assignments when I was that far out...

There will be hundreds of millions of the structs, each with thousands
of bitfields.

How about something that will compress the data before I store it into
Berkeley DB? That way I can use normal non-bitfield variables (like
normal people). Any suggestions?
 
Walter Roberson

There will be hundreds of millions of the structs, each with thousands
of bitfields.
How about something that will compress the data before I store it into
Berkeley DB? That way I can use normal non-bitfield variables (like
normal people). Any suggestions?

At this point we start getting into algorithms questions rather than
C-specific questions.

In order to provide you with the best advice about compressing the
data, we would have to know the relative number of times that the
data will be written and read, and some kind of information about the
relative amount of time "just being stored" against the time being
read. We would also need some idea about probability distributions
of the bits.

- If data is written once and then re-read many many times,
then it becomes cost effective to have the packing code "work hard",
to make the data compact and easy to read quickly even if computing
it the first time is a pain

- If data is mostly sitting around stored and not looked at very often,
then compactness becomes more important than read time

- If most bits are set the same way or if there would tend to be
large blocks with similar properties, then we might choose different
compression algorithms

But these are all matters of algorithms, and are probably best dealt
with in comp.compression.


I don't know what the data represents, so I'll go ahead and ask:
how important is it that a particular bit retrieval be correct?
I ask because for -some- applications, efficiencies in time and
storage can be achieved by using probabilistic storage rather than
guaranteed storage.

One example of this is the storage of words for spell checking
purposes: you don't need to store the words themselves, you only need
to store information -about- the words, and it does not have to
be fully accurate because the consequences of getting a spell
check slightly wrong are usually not extremely high. So one
approach used in the spell-checking example is to use several
(e.g., 6) different hash algorithms applied to the word, with each
hash algorithm outputting a bit position in the same array.
On storage of a new word, you set the bit at each of those positions.
To check a word you check each of those bit positions; if any of the
bits are *not* set then the word is not in the original list; if all
of the bits -are- set, then either the word was in the original list,
or else there was a probabilistic clash with a set of other words.
The larger the bit array you use, and the more hash functions you use,
the lower the chance of a "false positive" -- but that means
you can trade-off time (number of hashes processed) and space
(size of the bit array) against probability of accuracy (when the
consequences of inaccuracy are not severe.)
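
A sketch of that scheme, with toy hash functions standing in for real
ones (the table size and hashes are made up for illustration):

#define TABLE_BITS 65536ul
static unsigned char table[TABLE_BITS / 8];

static unsigned long hash(const char *word, unsigned seed)
{
    unsigned long h = seed * 2654435761ul + 1;
    while (*word)
        h = h * 31 + (unsigned char)*word++;
    return h % TABLE_BITS;
}

void add_word(const char *word)
{
    unsigned k;
    for (k = 0; k < 6; k++) { /* six hashes, as in the example */
        unsigned long b = hash(word, k);
        table[b / 8] |= (unsigned char)(1u << (b % 8));
    }
}

/* 0 means definitely absent; 1 means present -or- a false positive. */
int maybe_has_word(const char *word)
{
    unsigned k;
    for (k = 0; k < 6; k++) {
        unsigned long b = hash(word, k);
        if (!(table[b / 8] & (1u << (b % 8))))
            return 0;
    }
    return 1;
}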

But that's an algorithm, and algorithms in general belong in
comp.algorithms.
 
mazzawi

Thanks for the advice.

Right now I'm experimenting with zlib.h. Is that a good general-purpose
compression library?

I was trying to come up with a new scheme where I just have numbered
states and then just store the number of the state in Berkeley DB, but
I ended up with too many states, so that won't work.
In order to provide you with the best advice about compressing the
data, we would have to know the relative number of times that the
data will be written and read, and some kind of information about the
relative amount of time "just being stored" against the time being
read. We would also need some idea about probability distributions
of the bits.

It will be written:read 9:10, so almost as many writes as reads. But
space is at more of a premium than CPU.
- If data is mostly sitting around stored and not looked at very often,
then compactness becomes more important than read time

Some of the data will become inactive and later be cleaned up. About
10% of the data will be accessed several times daily. (Please don't
recommend caching.)
- If most bits are set the same way or if there would tend to be
large blocks with similar properties, then we might choose different
compression algorithms

I think there will be a lot of 0's, so I believe any general
compression algorithm will perform well.
I don't know what the data represents, so I'll go ahead and ask:
how important is it that a particular bit retrieval be correct?
I ask because for -some- applications, efficiencies in time and
storage can be achieved by using probabilistic storage rather than
guaranteed storage.

95+% accuracy would be acceptable. I'd have to go negotiate that :p

So far it's looking like a giant struct with bitfields that will be
compressed with zlib then stored in Berkeley DB.
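
Roughly like this, using zlib's one-shot API (the struct is a stand-in
for the real one):

#include <stdio.h>
#include <zlib.h>

struct record { /* stand-in for the giant bitfield struct */
    unsigned char payload[4096];
};

int store_compressed(const struct record *r)
{
    uLong  srcLen = sizeof *r;
    uLongf dstLen = srcLen + srcLen / 1000 + 12; /* zlib worst case */
    Bytef  dst[4096 + 4096 / 1000 + 12];

    if (compress(dst, &dstLen, (const Bytef *)r, srcLen) != Z_OK)
        return -1;
    /* dstLen is now the compressed size; hand dst to Berkeley DB here */
    printf("compressed %lu -> %lu bytes\n",
           (unsigned long)srcLen, (unsigned long)dstLen);
    return 0;
}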
 
