Not sure, it just seems weird in C.
Is it correct to say that every time the compiler sees 'edge' it
replaces those four letters with 'struct { int v; int weight; }'?
A typedef name does not behave like a naive preprocessor macro. It behaves
like a name which represents a type when it is mentioned in the right
context, in a scope where it is active. The preprocessing stages are blind to
such context.
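For instance (a rough sketch; the macro name EDGE_M is made up for
illustration), a typedef name follows ordinary scope rules and can be
shadowed, whereas a macro is substituted by blind text replacement no matter
where it appears:

#define EDGE_M struct { int v; int weight; }   /* blind textual substitution */
typedef struct { int v; int weight; } edge;    /* a scoped name bound to a type */

void f(void)
{
    double edge = 1.0;       /* fine: the typedef name is shadowed in this block */
    /* double EDGE_M = 1.0;     error: the macro expands here regardless of context */
    (void)edge;
}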
But could typedefs be processed within the compiler by a (Lisp or Scheme-like)
macro-expansion process over an abstract syntax tree?
Note that in the following example, x and y have different types:
struct { int foo; } x;
struct { int foo; } y;
The structs are anonymous (do not have a tag), so each one is
a unique type. But in the following example, x and y have the same type:
typedef struct { int foo; } foo_t;
foo_t x;
foo_t y;
If occurrences of the foo_t name are replaced by the struct in a naive,
textual way, or at the wrong processing stage in the compiler, this breaks:
x and y will have different types.
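To make that concrete (a small sketch any conforming compiler should accept),
assignment works between objects declared through the typedef name, but not
between objects declared with two textually identical anonymous structs:

typedef struct { int foo; } foo_t;
foo_t x, y;                /* x and y share the one type bound to foo_t */

struct { int foo; } a;     /* one anonymous type...                     */
struct { int foo; } b;     /* ...and a second, distinct anonymous type  */

void demo(void)
{
    x = y;                 /* OK: same type                             */
    /* a = b; */           /* error: a and b have incompatible types    */
}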
In order for x and y to have the same type above, foo_t must be bound to the
unique type that comes from reducing the tree node "struct {int foo;}" to that
unique type. Objects declared using foo_t are then declared using that unique
type, and not the original syntax from which it came.
This is why I'd hesitate to call it macro-expansion; the substitution has to be
done after some semantic translation has already happened, rather than up
front. Macro expansion generally means that we manipulate the syntax of a
language before doing anything semantically deep with it (at least in a
logical sense; macro expansion could be interleaved with semantic actions,
just not for the same items).
Since typedef names give us type equivalence which the corresponding syntax
replacement does not, we have to conclude that the binding of typedef names is
to a representation of the type that is somewhat deeper than surface syntax;
hence typedef names are more like variables than macros.
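As a sketch of that variable-like behavior, a typedef introduced inside a
block rebinds the name only for that block, just as a local variable
declaration would:

typedef struct { int v; int weight; } edge;    /* file-scope binding */

void g(void)
{
    typedef long edge;     /* rebinds edge within g() only */
    edge e = 0;            /* here, e is a long            */
    (void)e;
}
/* outside g(), edge again names the struct type */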