Andy
Clock cycle matters, but it is a second-order issue. The way we grade
things, about 20% of the grade is for performance, and that is
basically measured by how long it takes the various testbenches to
run. Good architecture can get you there (a low CPI), but clock
cycle obviously plays a big role too.
I assume you are talking about simulated time (i.e. simulated clock
cycles), not wall time running the simulation? Do the students get to
optimize the clock speed?
While RTL coding style does not have a direct impact on simulated time
performance, it does impact wall time spent in simulation. Faster
simulations (wall time) yield more time for debugging, "what if"
evaluations, etc., which then can improve simulated time performance.
Single process coding styles simulate faster than dual process
(combinatorial & clocked) styles. Most modern simulators "merge"
multiple processes that share the same sensitivity list, saving
overhead. Single process coding styles maximize this optimization,
since all the processes are sensitive to the same clock and async
reset signals. Combinatorial processes rarely share complete
sensitivity lists, so they benefit little from this optimization.
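For illustration, here is the single-process style in a minimal
sketch (clk, rst, enable, and count are made-up names; count is an
integer signal):

   count_reg : process (clk, rst) is
   begin
      if rst = '1' then
         count <= 0;
      elsif rising_edge(clk) then
         if enable = '1' then
            count <= count + 1; -- next-state logic lives inline
         end if;
      end if;
   end process;

Every process written this way is sensitive only to clk and rst, so
the simulator can merge all of them. Split the same logic into a
clocked process plus a combinatorial one, and the combinatorial
process carries a sensitivity list (enable, count) that nothing else
shares.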
Integer types simulate MUCH faster than slv/signed/unsigned. Use
multiply/divide for shifting, divide/mod for slicing, and multiply/add
for concatenation. The synthesis tool will reduce it down to the
shifts & masks just fine. If they're any good at writing low-level
SW, they'll be right at home anyway.
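For example, a hypothetical byte swap on a 16-bit value, with word,
hi_byte, lo_byte, and swapped all declared as natural variables:

   hi_byte := word / 256;              -- slice bits 15..8 (shift right 8)
   lo_byte := word mod 256;            -- slice bits 7..0 (mask)
   swapped := lo_byte * 256 + hi_byte; -- concatenation: lo_byte & hi_byte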
Variables are slightly faster than signals that are used only locally,
but not by much with a good simulator.
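For what it's worth, the pattern looks like this (a, b, and result
are made-up natural signals, a and b no larger than 255; sum never
leaves the process):

   process (clk) is
      variable sum : natural range 0 to 511; -- purely local intermediate
   begin
      if rising_edge(clk) then
         sum := a + b;      -- variable update: immediate, no event queue
         result <= sum / 2; -- only the final value drives a signal
      end if;
   end process;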
Andy