jacob navia
Background:
The C99 committee decided NOT to use existing practice and defined a new
notation for complex numbers:
Instead of
double complex foo = 1.2+2.3i;
that is used in gcc
the standard decided to introduce a NEW notation:
double complex foo = 1.2+2.3*I;
-----------------------------------------------------------------
The standard says in 7.3.1.3
The macro
I
expands to either _Imaginary_I or _Complex_I.
OK
It also says:
The macro _Complex_I expands to a constant expression of type const
float _Complex, with the value of the imaginary unit.
OK
Now, there is a small problem...
How do I define the imaginary unit?
I can't say
float _Complex _Complex_I = 0.0f + 1.0f*I;
as it would recurse!
THEN
The header complex.h MUST use non-standard notation to define
_Complex_I.
Obviously the solution is to write:
float _Complex _Complex_I = 1.0i;
(as gcc does in its header file complex.h)
I still do not understand why the committee decided not to use the existing
practice of adding an 'i' suffix to complex constants and instead created
a new notation full of problems:
(1) This notation depends on the header file complex.h to be able to
    recognize a complex constant. If you do not include the header
    file, literal complex constants won't be recognized.
(2) If the user redefines the macro "I", complex numbers can't be parsed
anymore. Note that the standard explicitly allows for redefinition
of "I". Go figure.
(3) All header files "complex.h" *must* use NON-STANDARD notation to
    define _Complex_I.
(4) All programs that define complex constants assume that the
    _Complex_I variable never changes; only then can the compiler
    simplify the expression
double complex foo = 2.3+5.8*I;
If a compiler simplifies this to a single complex number calculated at
compile time it is assuming that the value of _Complex_I doesn't change.
If we put _Complex_I as
#define _Complex_I (1.0i)
that will work but then we are expanding non-standard notation all over!
Since the standard implicitly assumes the 'i' suffix notation anyway:
WHY DO WE HAVE TO DO ALL THESE CONTORTIONS?
Why can't we write
double complex foo = 1.2+2.3i;
and be done with it???
That would obviate the need to define the macro "I" and to have
_Complex_I and all that absurd machinery.
Why make it more complicated? Can't we simplify this and use
the suffix notation?