standardisation of data types

itsharkopath

Hi,

Why were the size and range of data types not standardised in the C
language?

shar.
 
Jack Klein

itsharkopath said:
> Hi,
>
> Why were the size and range of data types not standardised in the C
> language?
>
> shar.

Who says they should have been? Why do you think they should be?
Defend your answer.
 
Malcolm

itsharkopath said:
> Why were the size and range of data types not standardised in the C
> language?
For efficiency. If, say, char were constrained to be 8 bits, then a machine
with nine bit bytes would have to do an additional operation to make the
statement ch++ work correctly when ch equals 255.

This does have the disadvantage that the same script is not guaranteed to
work the same when compiled for a different platform.
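
A small sketch along these lines (mine, not from the original post): because
the character type's range tracks the hardware rather than a fixed 255, an
unsigned char wraps at whatever UCHAR_MAX happens to be on the platform, with
no extra masking. unsigned char is used here so the wrap-around is well
defined.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    unsigned char ch = UCHAR_MAX;  /* 255 with 8-bit bytes, 511 with 9-bit bytes */

    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("UCHAR_MAX = %u\n", (unsigned)UCHAR_MAX);

    ch++;                          /* wraps to 0 with no additional operation */
    printf("after ch++: %u\n", (unsigned)ch);

    return 0;
}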
 
Keith Thompson

Malcolm said:
> For efficiency. If, say, char were constrained to be 8 bits, then a machine
> with nine bit bytes would have to do an additional operation to make the
> statement ch++ work correctly when ch equals 255.
>
> This does have the disadvantage that the same script is not guaranteed to
> work the same when compiled for a different platform.

"Script"? The term script usually refers to a program fed to an
interactive shell; "program" is more common for C.
 
itsharkopath

Hi,

I was thinking that it would provide a higher level of abstraction
and thus would make programming easier, without having to worry about
value limits, etc.
That is, isn't it easier to code if you know what values a variable is
capable of holding?

shar.
 
Mike Wahler

itsharkopath said:
> Hi,
>
> I was thinking that it would provide a higher level of abstraction
> and thus would make programming easier, without having to worry about
> value limits, etc.
> That is, isn't it easier to code if you know what values a variable is
> capable of holding?

That's what the macros in <limits.h> are for.
See e.g. 'CHAR_MAX', 'INT_MAX', 'LONG_MAX', etc.
In the interest of flexibility (i.e. supporting the widest
possible range of platforms), the language leaves these
exact limits (subject to minima) up to each implementation.
I.e. a machine which can handle e.g. 64-bit integers can
use those for 'int', but a machine which cannot can still
define a usable 'int' type.

-Mike
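
A minimal sketch (mine, not Mike's) along these lines: the exact limits are
implementation-defined, but <limits.h> reports what they are on whatever
implementation the program is compiled with.

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* Print the implementation's actual ranges for char, int and long. */
    printf("char: %d .. %d\n", CHAR_MIN, CHAR_MAX);
    printf("int : %d .. %d\n", INT_MIN, INT_MAX);
    printf("long: %ld .. %ld\n", LONG_MIN, LONG_MAX);
    return 0;
}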
 
Malcolm

itsharkopath said:
> I was thinking that it would provide a higher level of abstraction
> and thus would make programming easier, without having to worry about
> value limits, etc.
There's certainly an argument for it. Some languages, like Java, have chosen
this path. However, Java backtracked slightly. Floating-point values
originally had to match a fixed bit-level representation, which meant that
floating-point operations had to be emulated in software on machines whose
hardware did not match it exactly. This was very slow, so the language
specification was loosened slightly to permit those machines to use their
hardware registers.
> That is, isn't it easier to code if you know what values a variable is
> capable of holding?
In reality it is only a slight problem. The worst effect is that it is not
possible to put a portable binary "serialise" function into the language,
which would be very useful. Programs can be theoretically incorrect but in
practice OK. For instance, it is perfectly plausible that a machine might
have 16-bit ints, and it is also plausible that a company might have more
than 32767 employees. However, it is not plausible that such a company would
run its payroll on a machine with 16-bit ints. So a program that uses an int
to hold the employee count will never in fact be compiled in an environment
where it breaks.
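
A rough sketch (mine rather than Malcolm's) of the kind of hand-rolled
"serialise" routine he means: the value is written as four 8-bit groups,
least significant first, so the external format does not depend on the
host's int size or byte order. The name write_u32 and the little-endian
layout are purely illustrative choices.

#include <stdio.h>

/* Write 'value' as exactly four bytes, least significant byte first. */
int write_u32(unsigned long value, FILE *fp)
{
    int i;
    for (i = 0; i < 4; i++) {
        if (fputc((int)(value & 0xFFUL), fp) == EOF)
            return -1;             /* write error */
        value >>= 8;
    }
    return 0;
}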
 
