On 02/10/06 13:04, gk245 wrote:
> I mean, anything that follows a 0 is automatically turned into an octal
> number. I want to have an integer variable that will hold numbers as
> integers even if they begin with a zero.
> For example:
>     int 08453
> should print 08453 if I tried to use printf() on it, and not some octal
> number.

You're mixing up two different things.
First, an integer variable just holds integer values.
As it happens, C requires integer variables to use binary
representation, but that's very nearly irrelevant: if it
were not for a few operators like ^ and >>, whose behavior
is difficult to describe in base three, say, there would
be no need for C to have such a requirement. If you're
not using these "bit-defined" operations, you can (and,
I'd say, should) simply forget about the number base in
most instances. So: an integer variable holds integer
values, not decimal values or octal values.
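To make that concrete, here's a small sketch (the value is
just an example): the variable holds one value, and only the
printf conversion you pick decides which base shows up in the
output.

    #include <stdio.h>

    int main(void)
    {
        int n = 8453;   /* the variable holds a value, not a base */

        printf("decimal: %d\n", n);           /* 8453  */
        printf("octal:   %o\n", (unsigned)n); /* 20405 */
        printf("hex:     %x\n", (unsigned)n); /* 2105  */
        return 0;
    }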
Second, C programs use various source-code notations
that allow the programmer to specify values. There are
different notations for different types of value (for
example, "abc" is the notation for a string value, 1.5
is the notation for a double value). Some types have
more than one notation: 1.5 and 0.15e1 and 15e-1 are
different ways of writing one-and-a-half as a double
constant. For integers there are three ways to denote
an integer constant: in decimal, octal, or hexadecimal
base. (Actually, there are a few other ways. Without
any intent to ridicule you, I suspect you may not be ready
to learn about them yet and I'd prefer not to add to your
confusion just now.)
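For example (using an arbitrary value), these three constants
are just three spellings of the same integer:

    #include <stdio.h>

    int main(void)
    {
        /* One value, three source-code notations. */
        int dec = 8453;    /* decimal     */
        int oct = 020405;  /* octal       */
        int hex = 0x2105;  /* hexadecimal */

        /* Both comparisons print 1: the stored values are identical. */
        printf("%d %d\n", dec == oct, dec == hex);
        return 0;
    }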
The important thing to note is that the form of the
notation is part of the C language. When you start a
number with a zero digit, you have announced to the
compiler that you intend to use octal notation to express
the desired value. Start it with a zero followed by X (or x)
and you declare that you're using hexadecimal. Start it with a
non-zero digit and you say you're using decimal. That's
it, the unalterable It: it's the convention of C source
code notations. The conventions are in a sense arbitrary
(some other languages, for example, use notations like
'X'C and 'O'14 -- I have even seen twelve written as
"(2)300", the double quotes being part of the notation).
However, once the conventions are chosen they cannot be
changed. They exist as a means of communicating your
intent to the compiler; if you say one thing but mean
another, the compiler will fail to understand you properly.
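Here's what "saying one thing but meaning another" looks like
in practice: a constant that begins with 0 is read as octal,
and a digit octal doesn't have makes the constant ill-formed.

    #include <stdio.h>

    int main(void)
    {
        int a = 0123;         /* octal notation: the value is 83, not 123 */
        /* int b = 08453; */  /* won't compile: 8 is not an octal digit   */

        printf("%d\n", a);    /* prints 83 */
        return 0;
    }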
Finally, observe that these conventions apply only
in C source code (and to a few library functions). There
is no rule saying that the numbers you print must follow
the same conventions that C source does. For example,
try printf("%05d\n", 8453) and see if you like what you get.
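If it helps, here's a complete version of that suggestion,
plus a peek at one of those "few library functions" (strtol),
which lets you name the base yourself instead of inheriting
the source-code convention:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Output is yours to format: field width 5, padded with zeros. */
        printf("%05d\n", 8453);                    /* prints 08453 */

        /* Reading "08453" back in: base 10 ignores the leading zero;
           base 0 follows the C-constant rules and stops at the 8.    */
        const char *text = "08453";
        printf("%ld\n", strtol(text, NULL, 10));   /* prints 8453 */
        printf("%ld\n", strtol(text, NULL, 0));    /* prints 0    */
        return 0;
    }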