Marcus Harnisch said:
But that's not really simulation threads. Other vendors claim similar
things (waveform dump in a separate thread).
Synopsys talks about both Application Level Parallelism (ALP) and
Design Level Parallelism (DLP). The latter is simulation threads; the
former might not be. However, I haven't used this version of VCS, so
I can't verify what Synopsys is saying in its FAQs, press releases,
etc.
I guess the "If" is a significant issue. The analysis might be
difficult enough, I gather.

Yes. It's easy to imagine a design consisting of two small modules
where the output of each is fed into the input of the other. Both
depend upon the other's output, the communication latency would hurt
performance, and you would probably not split such a pair across two
cores/processors.
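
To make that concrete, here is a toy Go sketch (my own illustration of
the scheduling problem, nothing to do with how VCS actually partitions
a design): two cross-coupled "modules" run in lockstep on separate
goroutines, and each one must wait for the other's previous output
every simulated cycle, so the synchronization cost swamps the trivial
evaluation.

// Two cross-coupled "modules" evaluated in lockstep on separate
// goroutines. Each buffered channel acts as a one-slot register
// holding the other module's previous output.
package main

import "fmt"

func module(name string, in <-chan int, out chan<- int, cycles int) {
	for c := 0; c < cycles; c++ {
		v := <-in          // wait for the other module's output (the latency)
		out <- (v + 1) % 7 // trivial "evaluation", far cheaper than the wait
	}
	fmt.Println(name, "done")
}

func main() {
	aToB := make(chan int, 1)
	bToA := make(chan int, 1)
	aToB <- 0 // reset value: B's input at cycle 0
	bToA <- 0 // reset value: A's input at cycle 0
	done := make(chan struct{})
	go func() { module("A", bToA, aToB, 1000); done <- struct{}{} }()
	go func() { module("B", aToB, bToA, 1000); done <- struct{}{} }()
	<-done
	<-done
}

With modules this small, nearly all the wall-clock time in such a
setup is channel hand-off, which is the point.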

However, in a design where a testbench generates stimuli for the DUT
and the data are all inputs to the DUT, it would be feasible to split
the two across multiple processors, depending upon the bandwidth
needed to move the data from the stimulus generator to the DUT.
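
The one-directional case, again as a toy Go sketch of my own (the
buffer depth standing in for the bandwidth between the two cores):

package main

import "fmt"

// A "testbench" goroutine streams stimuli to a "DUT" goroutine
// through a buffered channel. Data flows one way only, so the two
// sides rarely block on each other.
type stimulus struct{ cycle, data int }

func main() {
	stim := make(chan stimulus, 1024) // the available "bandwidth"

	// Testbench: generates stimuli independently of the DUT's progress.
	go func() {
		for c := 0; c < 10000; c++ {
			stim <- stimulus{cycle: c, data: c * 3}
		}
		close(stim)
	}()

	// DUT: consumes stimuli; blocks only when the buffer runs dry.
	sum := 0
	for s := range stim {
		sum += s.data // stand-in for evaluating the design
	}
	fmt.Println("checksum:", sum)
}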

The analysis is costly, and it might be difficult to perform at
compile time in many cases; e.g., the toggling frequency of some
input might be a function of external data.
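
For instance (hypothetical data, same caveat as above): the number a
partitioning analysis would want, the activity on an input port, only
exists once the stimulus data is known, so a profiling run can
measure it but a compiler cannot.

package main

import "fmt"

// toggleRate returns the fraction of sample-to-sample transitions,
// a crude stand-in for the activity of an input port.
func toggleRate(samples []int) float64 {
	toggles := 0
	for i := 1; i < len(samples); i++ {
		if samples[i] != samples[i-1] {
			toggles++
		}
	}
	return float64(toggles) / float64(len(samples)-1)
}

func main() {
	// Two stimulus sets for the same port; the compiler sees neither.
	quiet := []int{0, 0, 0, 0, 1, 1, 1, 1}
	busy := []int{0, 1, 0, 1, 0, 1, 0, 1}
	fmt.Println("quiet run:", toggleRate(quiet)) // low activity
	fmt.Println("busy run: ", toggleRate(busy))  // high activity
}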

Marcus Harnisch also said:
Another requirement in simulation is the capability to rerun a test
*exactly* the same way it was executed before. Having the simulation
run in different threads in an inherently non-deterministic
environment (OS, other processes) and forcing these threads into a
deterministic execution sequence almost contradicts itself. I am sure
EDA vendors are racking their brains for a solution to this.

I can't see why it would be so difficult to keep track of thread
statistics and synchronization points (this should probably be a
simulator option) so that you can re-run the simulation on the same
processors, etc. Such a re-run might have lower performance, since
the loads on the other processors might differ from the previous
runs, but it would reproduce the execution exactly.
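
Roughly along these lines, as a Go sketch (the assumption that
determinism only needs to be restored at explicit synchronization
points is mine, not anything a vendor has described): record the
order in which threads pass a sync point on the first run, then force
exactly that order on the re-run.

package main

import (
	"fmt"
	"sync"
)

// replayGate forces threads through a synchronization point in a
// previously recorded order, trading speed for reproducibility.
type replayGate struct {
	mu    sync.Mutex
	cond  *sync.Cond
	order []int // thread ids in recorded arrival order
	next  int   // index of the thread allowed through next
}

func newReplayGate(order []int) *replayGate {
	g := &replayGate{order: order}
	g.cond = sync.NewCond(&g.mu)
	return g
}

// pass blocks thread id until the recorded schedule says it may go.
func (g *replayGate) pass(id int) {
	g.mu.Lock()
	for g.order[g.next] != id {
		g.cond.Wait()
	}
	g.next++
	g.cond.Broadcast()
	g.mu.Unlock()
}

func main() {
	recorded := []int{2, 0, 1, 2, 1, 0} // logged during the first run
	g := newReplayGate(recorded)
	var wg sync.WaitGroup
	for id := 0; id < 3; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 2; i++ { // each thread syncs twice
				g.pass(id)
				fmt.Println("thread", id, "passed the sync point")
			}
		}(id)
	}
	wg.Wait()
}

A thread that gets scheduled on a core that happens to be busy this
time makes the others wait at the gate, which is exactly the lower
performance I'd expect to pay for the determinism.
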
Petter