Hello,
The testbench is for a complex multi-functional FPGA. This FPGA has
several almost unrelated functions, which it makes sense to check
separately. So I'm in doubt as to how to structure my testbench. As I
see it, there are two options:
1) One big testbench for everything, and some way to sequence the
tests of the various functions
2) Separate testbenches, one for each function
Each approach has its pros and cons. Option (2) sounds the most
logical, at least at first, but when I begin pursuing it, some
problems emerge. For example, too much duplication between testbenches
- in all of them I have to instantiate the large top-level entity of
the FPGA, write the compilation scripts for all the files, and so on.
I would have a separate testbench for each of these individual
functions; however, they would not instantiate the top-level FPGA at
all. Each function presumably has its own 'top' level that implements
that function, and the testbench for that function should be putting
that entity through the wringer, making sure that it works properly.
Having a single instantiation of the top level of the entire design
just to test some individual sub-function tends to dramatically slow
down simulation, which has the following rather severe consequences:
- Given an arbitrary period of wall clock time, less testing of a
particular function can be performed.
- Less testing implies less coverage of oddball conditions.
- When you have to change the function to fix a problem that didn't
happen to show up until the higher-level testing was performed,
regression testing will also be hampered by the above two points when
you try to verify that in fixing one problem you didn't create
another.
As a basic rule, a testbench should be testing new design content
that occurs roughly at that level, not design content that is buried
way down in the design. At the top level of the FPGA design, the only
new design content is the interconnect to the top-level functions and
the instantiation of all of the proper components.
Also, I wouldn't necessarily stop at the FPGA top level either. The
FPGA exists on a PCBA with interconnect to other parts (all of which
can be modelled), and the PCBA exists in some final system that simply
needs power and some user input/output (which again can be modelled).
All of this may seem to be somewhat of a side issue, but it's not.
Take, for example, the design of a DDR controller. That controller
should have its own testbench that rather extensively tests all of
the operational modes. At the top level of the FPGA, though, you
would simply instantiate it. Exhaustively testing again at that level
would be pretty much wasted time. A better test would be to model the
PCBA (which would instantiate a memory model from the vendor) and
walk a one across the address and data buses, since that effectively
verifies the interconnect, which at the FPGA top level and the PCBA
level is all the new design content that exists with regard to the
DDR controller.
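The walking-one idea takes only a few lines; here is an illustrative
Python sketch (none of this code is from the post) of the vectors a
board-level testbench would drive, one per bus line:

```python
# Illustrative only: generate walking-one vectors for a bus of a
# given width. Driving each vector and checking that exactly that
# bit arrives at the memory model catches swapped, open, or shorted
# interconnect, which is all the new content at the board level.

def walking_ones(width):
    """Yield vectors with a single '1' bit walking across `width` bits."""
    for bit in range(width):
        yield 1 << bit

# Example: a 16-bit address bus needs only 16 vectors to prove
# every line is connected one-to-one.
vectors = list(walking_ones(16))
assert len(vectors) == 16
assert vectors[0] == 0x0001 and vectors[15] == 0x8000
```

The same generator covers both the address and data buses; only the
width argument changes.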
As another example, let's say your system processes images and
produces JPEG output and you're writing all the code yourself. You
would probably want some extensive low level testing of some of the
basic low level sub-functions like...
- 1d DCT transform
- 2d DCT transform
- Huffman encoding
- Quantizer
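For a sub-function like the quantizer, the low-level testbench would
compare the hardware against a trivially simple golden model. The
sketch below is illustrative Python, not from the post; the function
name is mine, though divide-by-Q-and-round is the actual baseline
JPEG quantization rule:

```python
# Illustrative golden model for the quantizer stage. "Obviously
# correct" reference code like this is what the low-level testbench
# checks the HDL implementation against.

def quantize(coeffs, q_table):
    """Quantize DCT coefficients: divide each by its Q-table entry
    and round to the nearest integer, as in baseline JPEG."""
    assert len(coeffs) == len(q_table)
    return [round(c / q) for c, q in zip(coeffs, q_table)]

# Example: coarse quantization collapses small coefficients to zero.
out = quantize([120.0, 15.0, -7.0, 3.0], [16, 11, 10, 16])
assert out == [8, 1, -1, 0]
```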
Another level of the design would tie these pieces together (along
with all of the other pieces necessary to make a full JPEG encoder)
and test that the whole thing compresses images properly. At that
level, you'd probably also be extensively varying the image input,
the Q tables, the Huffman tables, and the flow control in and out,
and running lots of images through to convince yourself that
everything is working properly. But you would be wasting time (and
wouldn't really be able) to vary all of the parameters to those
lower-level DCT functions, since they would likely be parameterized
for things like input/output data width and size. In the JPEG encoder
it doesn't matter if there is a bug that could only affect an 11x11
element 2d DCT, since you're using it in an 8x8 environment. That
lower-level testbench is the only thing that could uncover the bug in
the 11x11 case; if you limit yourself to testbenches that can only
operate at some higher level, you will be completely blind and think
that your DCT is just fine, until you try to use it with some
customer who wants an 11x11 DCT for whatever reason.
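To make the point concrete, here is a hedged Python sketch (standing
in for whatever language the golden model would actually be written
in; none of it is from the post) of a low-level DCT check that sweeps
the size parameter, including the 11-point case that an 8x8
encoder-level testbench could never reach:

```python
import math
import random

def dct_1d(samples):
    """Naive O(N^2) orthonormal DCT-II reference model. Slow but
    obviously correct, which is exactly what a golden model in a
    low-level testbench should be."""
    n = len(samples)
    return [(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)) *
            sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(samples))
            for k in range(n)]

# Sweep the size parameter, including the odd sizes (like 11) that an
# 8x8 JPEG encoder would never exercise. An orthonormal transform
# preserves energy (Parseval), which gives a cheap per-size self-check.
random.seed(0)
for n in range(2, 13):
    x = [random.uniform(-1.0, 1.0) for _ in range(n)]
    y = dct_1d(x)
    assert abs(sum(v * v for v in x) - sum(v * v for v in y)) < 1e-9
```

A real testbench would compare the HDL output against this reference
sample by sample; the energy check just illustrates that each size in
the sweep gets its own verdict.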
Similarly, re-testing at the next higher level those same things that
you already vary at the 'image compression' level would be mostly
wasted time that could've been better spent somewhere else.
Oftentimes integration does produce conditions that were not
considered at the lower, functional testbench level; but to catch
those, what you need is something that predicts the correct response
given the testbench input and asserts on any differences.
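That predict-and-assert scheme is essentially a scoreboard. A minimal
illustrative sketch in Python (all the names here are mine, not from
the post):

```python
# Illustrative scoreboard: a predictor generates the expected
# response for each stimulus, and any mismatch with the DUT output
# is recorded immediately.

def check(stimuli, predict, dut_response):
    """Compare DUT output against the predictor for every stimulus;
    return a list of (stimulus, expected, actual) mismatches."""
    mismatches = []
    for s in stimuli:
        expected, actual = predict(s), dut_response(s)
        if expected != actual:
            mismatches.append((s, expected, actual))
    return mismatches

# Example with a deliberately buggy 'DUT' that fails on negative input.
predict = lambda x: x * 2
buggy_dut = lambda x: x * 2 if x >= 0 else 0
errs = check(range(-2, 3), predict, buggy_dut)
assert errs == [(-2, -4, 0), (-1, -2, 0)]
```

In an HDL testbench the same idea is usually a behavioural predictor
process plus assert statements on the comparison.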
Testing needs to occur at various levels, not just one (even if that
one level is able to effectively disable all the other functions but
the one you're currently interested in). Ideally, this testing occurs
at every level where significant new design content is being created.
At a level where 'insignificant' (but still important) new content is
created (like the FPGA top level, which simply instantiates and
interconnects), you can generally get more bang for the buck by going
up yet another level (to the PCBA).
Kevin Jennings