rickman
Sometimes I wonder if HDLs are really the right way to go. I mainly
use VHDL which we all know is a pig in many ways with its verbosity
and arcane type conversion gyrations. But what bothers me most of all
is that I have to learn how to tell the tools in their "language" how
to construct the efficient logic I can picture in my mind. By
"language" I don't mean the HDL language, but actually the specifics
of a given inference tool.
What I mean is, if I want a down counter that uses the carry out to
give me an "end of count" flag, why can't I get that in a simple and
clear manner? It seems like every time I want to design a circuit I
have to experiment with the exact style to get the logic I want and it
often is a real PITA to make that happen.
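To make it concrete, here is roughly the kind of thing I mean. This is just a sketch with made-up names, using the old trick of one extra counter bit so the flag really is the subtractor's borrow/carry out:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Sketch only: the extra top bit of "count" is the borrow, so
-- "done" comes from the carry chain instead of a comparator.
entity down_ctr is
  generic (N : positive := 8);
  port (clk, load : in  std_logic;
        start     : in  unsigned(N-1 downto 0);
        done      : out std_logic);
end entity down_ctr;

architecture rtl of down_ctr is
  signal count : unsigned(N downto 0);   -- N bits plus a borrow bit
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if load = '1' then
        count <= '0' & start;            -- clear the borrow on load
      elsif count(N) = '0' then
        count <= count - 1;              -- run until we borrow
      end if;
    end if;
  end process;
  done <= count(N);  -- flag fires when the count wraps below zero;
                     -- offset the loaded value to move where it fires
end architecture rtl;

Whether the tools actually put count(N) on the carry out is, of course, exactly the kind of thing you end up having to experiment with.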
For example, I wanted a down counter that would end at 1 instead of 0
for the convenience of the user. To allow a full 2^N range, I thought
it could start at zero and run for the entire range by wrapping around
to 2^N-1. I had coded the circuit using a natural range 0 to
(2^N)-1. I did the subtraction as a simple assignment
foo <= foo - 1;
I fully expected that, even if loading a 0 and letting it count
"down" to (2^N)-1 were flagged as a range error in simulation, it
would work in the real hardware, since I stop the down counter when
it reaches 1, not zero. Loading a zero into an N-bit counter would
wrap around just fine.
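Stripped down, it looked something like this (a sketch; the names are made up and the load logic is omitted):

library ieee;
use ieee.std_logic_1164.all;

entity nat_ctr is
  generic (N : positive := 8);
  port (clk : in std_logic; done : out std_logic);
end entity nat_ctr;

architecture rtl of nat_ctr is
  signal foo : natural range 0 to 2**N - 1 := 0;  -- starts at zero
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if foo /= 1 then       -- stop at 1, not 0
        foo <= foo - 1;      -- range error in simulation at foo = 0
      end if;
    end if;
  end process;
  done <= '1' when foo = 1 else '0';
end architecture rtl;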
But to make the simulation behave the same as the hardware I expected
to get, I thought it might be good to add some simple code to handle
the wraparound, so I made the assignment modulo 2^N. With the "mod"
added, though, the synthesized size blew up to nearly double what it
was without it, mostly in additional adders! I didn't have time to
explore what caused this, so I just left out the modulo operation and
will live with what I get when a zero starting value is loaded.
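If I get back to it, one thing I would try is numeric_std's unsigned
instead of natural, since unsigned subtraction is defined to wrap
modulo 2^N in both simulation and hardware, with no explicit mod
needed. Again just a sketch:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity uns_ctr is
  generic (N : positive := 8);
  port (clk : in std_logic; done : out std_logic);
end entity uns_ctr;

architecture rtl of uns_ctr is
  signal foo : unsigned(N-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if foo /= 1 then
        foo <= foo - 1;   -- 0 - 1 quietly wraps to 2**N - 1
      end if;
    end if;
  end process;
  done <= '1' when foo = 1 else '0';
end architecture rtl;

That should match the hardware without the extra adders, though with these tools you never really know until you run it.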
I guess what I am trying to say is I would like to be able to specify
detailed logic rather than generically coding the function and letting
a tool try to figure out how to implement it. This should be possible
without the problems of instantiating primitives directly (vendor
specific, clumsy, hard to read...). In an ideal design world,
shouldn't it be pretty
easy to infer logic and to actually know what logic to expect?
Rick