You seem to put a lot of stock in the effortlessness of boilerplate,
yet you prefer a language that is said to reduce the need for
boilerplate.
Not all boilerplate is created equal. In particular, some boilerplate
is not only easy to glance at and understand, but also, more
importantly, easy to write from first principles without knowing
arcane corners of the language you are coding in.
OK, so you mention that you could write a script to automate all of
that, but to work, it would depend on a specific non-standard,
non-enforceable naming convention. Not to mention that this script has
yet to be written and offered to the public, for free or for a fee.
Which means each of those who would follow your advice must write,
test, run, and maintain their own script (or maybe even sell it to the
rest of us, if they felt there was a market).
That's a good point. I have some languishing tools for this (because
the boilerplate is never quite bad enough to make it worth working on
the tools some more) that I should clean up and publish.
Alas, we have no such scripts. So that would put most users back to
typing out all that boilerplate. And once it is typed, there is no
compiler to check it for you (unlike much of the boilerplate often
attributed to VHDL).
Well, actually, the stock Verilog tools do a pretty darn good job
these days.
What's really silly is how the two-process code model even got
started. The original synthesis tools could not infer registers, so
you had to instantiate them separately from your combinatorial code.
Once the tools progressed and could infer registers, the lowest-impact
change to the existing coding style (and tools) was simply to replace
the code that instantiated registers with code that inferred them,
still keeping that code separate from the logic code.
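In a sketch, that progression looked something like this (DFF stands
in for a vendor flip-flop cell; all names are illustrative):

    // Original style: the logic in one place, an explicit register
    // instance next to it (DFF is a stand-in for a library cell).
    wire d = a & b;                       // combinatorial logic
    DFF u_q (.clk(clk), .d(d), .q(q_inst));

    // Once tools could infer registers, the instance was simply
    // swapped for an inferring process, still kept separate from
    // the logic:
    reg q_inferred;
    always @(posedge clk)
      q_inferred <= d;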
That may well be. Nonetheless, many people found the two-process
method better, even before 'always @*' or the new SystemVerilog
'always_comb', to the point where they maintained ungodly long
sensitivity lists. Are you suggesting that none of those people were
reflective enough to try to figure out whether that was the best way
(for them) to code?
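For anyone who never suffered them, this is the burden in question (a
sketch; the signal names are invented):

    reg y;
    // Pre-Verilog-2001: every right-hand-side signal had to be
    // listed by hand. Miss one, and simulation quietly uses stale
    // values while synthesis does not.
    always @(a or b or sel or en)
      y = en ? (sel ? a : b) : 1'b0;

    // 'always @*' (and later 'always_comb') made the list go away;
    // the body is unchanged.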
Finally, someone (God bless them!) figured out how to do both logic
and registers from one process with less code, boilerplate or not.
Yes, and the most significant downside to this is that access to the
'before clock' and 'after clock' versions of the same signal is
implicit; in fact, in some cases (e.g. if you use blocking
assignments) you can have more than two different values of the same
signal within a process, all under the same name. There is no question
that in many cases this is not an issue and the one process model will
work fine.
But I think most who do serious coding with the 'one process' model
will, at least occasionally, wind up having two processes (either a
separate combinatorial process, or two interrelated sequential
processes) to cope with not having an explicit delineation of 'before
clock' and 'after clock'.
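To make the blocking-assignment point concrete (a tiny sketch; the
names are made up):

    reg [3:0] count;
    always @(posedge clk) begin
      count = count + 1;        // reads the 'before clock' value
      if (count == 4'd10)       // but this reads the updated value
        count = 4'd0;           // and now a third value, same name
    end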
At the end of the day, it is certainly desirable to have something
that looks more like the 'one process' model but gives explicit access
to 'previous state' and 'next state', so that complicated
combinatorial logic with interrelated variables can always be
expressed inside the same process without resorting to weird code
ordering done just to make sure the synthesizer and simulator create
the structures you want.
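As a rough sketch of the direction I mean (SystemVerilog; all names
are illustrative), the closest thing I know how to write today still
uses two processes, but shrinks the register process to a single
self-evident line:

    typedef struct packed {
      logic [3:0] count;
      logic       done;
    } state_t;
    state_t state, nxt;   // 'after clock' and 'before clock', by name

    always_comb begin
      nxt = state;                       // explicit previous state
      nxt.count = state.count + 4'd1;
      nxt.done  = (nxt.count == 4'd10);  // interrelated update
    end

    always_ff @(posedge clk)
      state <= nxt;                      // the entire register process

The struct keeps the interrelated variables together, and 'state'
versus 'nxt' makes the previous and next values explicit by name.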
For all your staunch support of this archaic coding style, we still
have not seen any examples of why a single process style did not work
for you. Instead of telling me why the boilerplate's not as bad as I
think it is, tell me why it is better than no boilerplate in the first
place.
A paper I have mentioned in other posts,
http://www.sunburst-design.com/papers/CummingsSNUG2000SJ_NBA.pdf,
gives some good examples of why the general rule of never using
blocking assignments in sequential blocks is good practice. I have
seen some dismiss this paper here, but I haven't seen a technical
analysis of why it's wrong. The paper itself matches my own prior
experience, and I also feel that related variables should be processed
in the same block. When I put this preference together with the
guidelines from the paper, it turns out that a reliable, general way
to achieve good results without thinking about it too hard is to use
the two-process model.
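The flavor of pitfall the paper documents is, in my own paraphrase
(not copied from it), the classic swap:

    // With blocking assignments, the result depends on which
    // process the simulator happens to evaluate first; a race:
    always @(posedge clk) a = b;
    always @(posedge clk) b = a;

    // With nonblocking assignments, both right-hand sides sample
    // the pre-clock values, so the swap is deterministic:
    always @(posedge clk) a <= b;
    always @(posedge clk) b <= a;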
But, if you can tell me that you *always* manage to put *all* related
variables in the same sequential block, and *never* get confused about
it (and never confuse any co-workers), then, like KJ and Bromley and
some others, you have no reason to consider the two-process model.
OTOH, if you sometimes get confused, or have easily confused
co-workers, and/or find yourself using multiple sequential processes
where each process references variables registered in the others, then
you might want to consider whether slicing related functionality into
processes in this fashion is really better than slicing the processes
so that you keep all the related functional variables together in a
single combinatorial process, and simply extract the registers out
into a very well-understood model.
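Concretely, that slicing looks something like this (a sketch; the
names are invented):

    reg [3:0] count, count_nxt;
    reg       valid, valid_nxt;

    // All the related variables live together in one combinatorial
    // process, with explicit next-state names:
    always @* begin
      count_nxt = count;                  // defaults: hold state
      valid_nxt = 1'b0;
      if (start) begin
        count_nxt = count + 4'd1;
        valid_nxt = (count_nxt == 4'd15); // interrelated update
      end
    end

    // ...and the registers are extracted into the well-understood
    // model:
    always @(posedge clk) begin
      count <= count_nxt;
      valid <= valid_nxt;
    end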
At the end of the day, I am willing to concede that the two-process
model is, at least partly, a mental crutch. Am I a mental cripple?
In some respects, almost certainly. But on the off-chance that I am
not the only one, I tolerate a certain amount of abuse here in order
to explain, to others who may also be easily confused, that there are
coding styles other than the single process model.
I will also concede that the single process model can be beefed up
with things like tasks or functions (similar to what Mike Treseler has
done) to overcome some of its shortcomings. However, personally, I
don't really find that to be any better than putting the combinatorial
stuff in a separate process.
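My understanding of that flavor, as a hedged Verilog sketch (Mike
Treseler's actual examples are in VHDL; everything here is invented
for illustration):

    // The combinatorial update is factored into a function, so the
    // single clocked process stays short:
    function [3:0] next_count(input [3:0] c, input start);
      next_count = start ? c + 4'd1 : c;
    endfunction

    reg [3:0] count;
    always @(posedge clk)
      count <= next_count(count, start);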
Regards,
Pat