Andy
KJ said: I'm not sure that 'maximize use of variables' is any sort of useful
metric (function/performance/power/code maintainability are more
useful), but I agree with you and the original post author that what
was posted is an improvement over using two processes and belongs in
the bag of tricks.
Maximizing use of variables also maximizes RTL simulation performance,
since variable updates and accesses carry far less overhead than signal
updates and accesses. Faster simulation translates into more corner
cases hit, and more bugs found.
But the original post also mentioned this as a good way to avoid
unwanted gated clocks. And in my original post I simply mentioned that
using an asynchronously resettable shift register to generate the reset
signal for everything else in the design, and then using synchronous
resets throughout, avoids the situation entirely, in most cases costs
darn near nothing, and performs virtually the same.
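(For reference, that scheme looks roughly like the sketch below; the
entity, names, and four-stage depth are my assumptions, not from the
thread.)

library ieee;
use ieee.std_logic_1164.all;

entity reset_gen is
  port (
    clk      : in  std_logic;
    rst_n_in : in  std_logic;   -- external asynchronous reset, active low
    rst_out  : out std_logic);  -- synchronous reset for the rest of the design
end entity reset_gen;

architecture rtl of reset_gen is
  signal sr : std_logic_vector(3 downto 0) := (others => '1');
begin
  -- Asynchronously resettable shift register: asserts immediately,
  -- releases only after several clean clock edges.
  process (clk, rst_n_in)
  begin
    if rst_n_in = '0' then
      sr <= (others => '1');
    elsif rising_edge(clk) then
      sr <= sr(sr'high - 1 downto 0) & '0';
    end if;
  end process;

  rst_out <= sr(sr'high);

  -- Everything else then uses rst_out as an ordinary synchronous reset:
  --   if rising_edge(clk) then
  --     if rst_out = '1' then q <= (others => '0');
  --     else                  q <= d;
  --     end if;
  --   end if;
end architecture rtl;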
Agreed, but the devil's in the details of "most" and "virtually".
The only functional difference between the two is that PRIOR to that
first rising edge of the clock the outputs are in a different state.
AFTER the first rising edge everything is the same. The reset signal
itself can come while the clock is shut off; it's just that the result
of that reset doesn't show up until the clock starts.
Maybe I'm being picky, but that can be a big difference! They are NOT
the same, particularly when meeting requirements in the absence of a
clock!
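(To make the difference concrete, here are the two flavors side by side
in a throwaway entity of my own: q_async clears the instant rst
asserts, clock or no clock, while q_sync clears only at the first
rising edge after rst asserts.)

library ieee;
use ieee.std_logic_1164.all;

entity reset_compare is
  port (
    clk, rst, d     : in  std_logic;
    q_async, q_sync : out std_logic);
end entity reset_compare;

architecture rtl of reset_compare is
begin
  -- Asynchronous reset: effective with no clock at all.
  process (clk, rst)
  begin
    if rst = '1' then
      q_async <= '0';
    elsif rising_edge(clk) then
      q_async <= d;
    end if;
  end process;

  -- Synchronous reset: needs a rising clock edge to take effect.
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        q_sync <= '0';
      else
        q_sync <= d;
      end if;
    end if;
  end process;
end architecture rtl;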
As a practical matter, that functional difference is generally of no
importance... for the simple reason that the clock usually isn't
running because something has knowingly shut it off (e.g. to conserve
power). In any case, whatever controls the clock certainly knows to
ignore the outputs of a function it is not actively using, so the fact
that the outputs aren't the way you think they should be really doesn't
matter darn near all the time.
Maybe "generally" and "usually" in your work, but not in mine!
If you think the slight functional difference is important because this
signal is a 'really important' signal that absolutely must come up
correctly (e.g. launch the missiles), then think again. Before any
properly designed system would turn over control of that 'really
important' signal in the first place, it would first test the circuit
to make sure that it is working correctly (i.e. no false launches... no
missed launch commands). Only then would it allow that circuit to drive
the 'really important' signal... and it would only do so after starting
the clock, because the designer realizes that the outputs become valid
after the clock, not before.
This is not about proper initialization at startup, though those
problems are often the result of improperly handling (synchronizing)
reset inputs, no matter whether the end circuit is designed with an
async or a sync reset.
If the clock isn't running because it is just busted, then maybe the
slight functional difference does become important, but only if it
prevents the system from properly diagnosing which field replaceable
unit needs replacing, or from routing around the failing component.
This is the most common root cause of the requirement for
predictable/safe behavior in the absence of a clock: a failed clock
input. If the system is designed to shut off the clock, then not
handling that would be a design defect. When system outputs directly
control things that can destroy themselves (or destroy something else)
if not actively controlled (motor servo loops are just one example),
then performance without a clock becomes vitally important,
particularly in medical, automotive, and military applications where
human lives are at stake.
Think you meant "the option of a sync reset"
Either one: RAM and shift register primitives are a good example.
Agreed... keeping in mind that using async resets requires more 'skill'
(for lack of a better word) than sync resets.
Once the deassertion edge is handled, there is no more or less skill
involved in using async vs. sync resets. Handling the deassertion edge
is no more or less difficult than properly synchronizing a reset input
in the first place. Both have to be done for each clock domain. If
there is no more skill involved, why not use the "safest" approach,
which guarantees performance even in the absence of a clock? Now, if it
were for an ASIC, where flop primitives with async resets come at a
real-estate, if not performance, disadvantage, then by all means use
the sync reset (which usually gets munged in with the gates anyway) if
there are no requirements for safing the outputs in the absence of the
clock.
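For anyone following along, "handling the deassertion edge" usually
amounts to something like the sketch below (again, entity and names are
mine): the flops keep their asynchronous assertion, but the release is
filtered through two clocked stages so it cannot violate
recovery/removal timing.

library ieee;
use ieee.std_logic_1164.all;

entity rst_deassert_sync is
  port (
    clk      : in  std_logic;
    rst_n_in : in  std_logic;   -- raw asynchronous reset, active low
    rst_n    : out std_logic);  -- asserts asynchronously, deasserts synchronously
end entity rst_deassert_sync;

architecture rtl of rst_deassert_sync is
  signal meta_ff, sync_ff : std_logic := '0';
begin
  process (clk, rst_n_in)
  begin
    if rst_n_in = '0' then
      meta_ff <= '0';          -- assert immediately, no clock required
      sync_ff <= '0';
    elsif rising_edge(clk) then
      meta_ff <= '1';          -- release only after two clean clock edges
      sync_ff <= meta_ff;
    end if;
  end process;

  rst_n <= sync_ff;
end architecture rtl;

One of these per clock domain, which is no more work than synchronizing
a sync reset input per clock domain.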
Even if it were more difficult, that's why they pay us the big bucks!
Andy