I was thinking that would provide a higher level of abstraction,
and thus would make programming easier without one having to worry
about value limits etc.
There's certainly an argument for it. Some languages, like Java, have chosen
this path. However, Java backtracked slightly. Floating-point operations
originally had to produce an exactly specified bit-for-bit result, which
meant they had to be emulated in software on machines whose hardware did not
match that specification precisely. This was very slow, so the language
specification was loosened slightly to permit those machines to use their
hardware registers.
i.e. isn't it easier to code if you know what values the variable is
capable of holding?
In reality it is only a slight problem. The worst effect is that a portable
binary "serialise" function cannot be put into the language, which would be
very useful. Programs can be theoretically incorrect but in practice OK. For
instance, it is perfectly plausible that a machine might have 16-bit ints,
and it is also plausible that a company might have more than 32767
employees. However, it is not plausible that such a company would run its
payroll on a machine with 16-bit ints. So a program that uses an int to hold
the employee count will never in fact be compiled in an environment where it
breaks.
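
To make the "serialise" point concrete, here is a minimal sketch (not from
the original post) of the kind of helper a C programmer ends up writing by
hand, precisely because the language cannot supply it: the value is emitted
byte by byte in a fixed width and order, so the on-disk format does not
depend on the host's int size or byte order. The function names and the
choice of a 32-bit little-endian format are assumptions for illustration.

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical helper: write v as exactly four little-endian bytes,
       regardless of the host's int size or byte order. */
    static int write_u32_le(FILE *fp, uint32_t v)
    {
        unsigned char buf[4];

        buf[0] = (unsigned char)(v & 0xFF);
        buf[1] = (unsigned char)((v >> 8) & 0xFF);
        buf[2] = (unsigned char)((v >> 16) & 0xFF);
        buf[3] = (unsigned char)((v >> 24) & 0xFF);

        return fwrite(buf, 1, 4, fp) == 4 ? 0 : -1;
    }

    /* Hypothetical counterpart: read the four bytes back into a uint32_t. */
    static int read_u32_le(FILE *fp, uint32_t *out)
    {
        unsigned char buf[4];

        if (fread(buf, 1, 4, fp) != 4)
            return -1;

        *out = (uint32_t)buf[0]
             | ((uint32_t)buf[1] << 8)
             | ((uint32_t)buf[2] << 16)
             | ((uint32_t)buf[3] << 24);
        return 0;
    }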
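
And if you want the "theoretically incorrect but in practice OK" assumption
made explicit rather than left implicit, a small hedged sketch using
<limits.h> will do it: a compile-time check that refuses to build the
payroll program on a platform whose int cannot hold the expected employee
count. The 100000 threshold is an illustrative assumption.

    #include <limits.h>

    /* Illustrative guard: if this payroll program is ever compiled on a
       machine with 16-bit ints (INT_MAX == 32767), fail loudly at compile
       time instead of overflowing silently at run time. */
    #if INT_MAX < 100000
    #error "int is too small to hold the employee count on this platform"
    #endif

    int employee_count;  /* safe here: the check above has already passed */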