I would advise against a reset signal in input synchronization blocks.
You are trying to generate an internal signal (sync_sig_2) which
follows an external asynchronous signal (sig) and filters out most of
the metastability and other asynchronous race conditions. Adding a
reset signal will only delay the input synchronization time during
reset, without any added advantage.
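For reference, the scheme under discussion is roughly the following
sketch (sync_sig_1 and clk are names I am assuming here, they don't
appear in your post):

  reg sync_sig_1, sync_sig_2;

  always @(posedge clk) begin
    sync_sig_1 <= sig;         // first stage: may go metastable
    sync_sig_2 <= sync_sig_1;  // second stage: a full cycle to settle
  end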
On the contrary, I fail to see any clear disadvantage to this method.
Delaying the synchronized signal by a further two clocks after a reset
doesn't really matter much, since the reset itself is typically very
long (these resets usually come either from a uC or a voltage
supervisor chip, and in both cases are long).
However, I see two advantages:
1) Reset can be a sign that there is trouble, especially with the
voltage supply. A good voltage supervisor chip will drive reset_n
low if it detects a brownout. If my synchronized signal drives some
critical output, I want to know it's in a safe ground state when there
is a reset (see the sketch below).
2) A reset into every FF makes simulation easier, without any 'X'es.
And correct me if I'm wrong, but in FPGAs these resets are free: all
FFs have 'clear' inputs, with routing over global fast signals. And
based on the link you have provided (below), I understand that FPGAs
are your interest as well.
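Concretely, the variant I'm advocating would look something like this
sketch (again, sync_sig_1 and clk are assumed names, and clearing to
zero is only my assumption about what the safe ground state is):

  reg sync_sig_1, sync_sig_2;

  always @(posedge clk or negedge reset_n) begin
    if (!reset_n) begin
      sync_sig_1 <= 1'b0;      // assumed safe ground state in reset
      sync_sig_2 <= 1'b0;
    end else begin
      sync_sig_1 <= sig;
      sync_sig_2 <= sync_sig_1;
    end
  end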
Besides adding complexity for no reason, an asynchronous reset will
make sync_sig_2 asynchronous. Ken Chapman published a very interesting
article about reset strategies in FPGA designs called "Get Smart About
Reset (Think Local, Not Global)". You can find it on Xilinx's website
at http://www.xilinx.com/xlnx/xweb/xil_tx_display.jsp?iLanguageID=1&mult...
This is an interesting article, but it addresses a completely
different matter, one that has sprung up in this group a few times. It
is generally agreed that it's good practice to make the reset_n
release synchronous.
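For completeness, "making the reset_n release synchronous" typically
means a small circuit like this sketch (reset_n_meta and reset_n_sync
are names I made up for illustration):

  reg reset_n_meta, reset_n_sync;

  always @(posedge clk or negedge reset_n) begin
    if (!reset_n) begin
      reset_n_meta <= 1'b0;          // assertion is asynchronous
      reset_n_sync <= 1'b0;
    end else begin
      reset_n_meta <= 1'b1;          // release ripples through two
      reset_n_sync <= reset_n_meta;  // FFs, synchronous to clk
    end
  end

reset_n_sync then resets the rest of the design: it asserts
asynchronously but deasserts synchronously to clk.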
Kind regards
Eli