Jim Berry
Hi.
Having taken the plunge and bought the starter board and read as many
tutorials as I could find and played with example code, I want to nail
down some VHDL basics before I proceed.
I've noticed all sorts of different uses of "standard" IEEE libraries,
and so I read through them (as distributed with the Xilinx ISE) and sorta
think I get it, but would appreciate it a lot if you would let me know if
I'm more-or-less on the mark with the following statements:
- std_logic_1164 defines basic multi-valued logic types STD_LOGIC and
STD_LOGIC_VECTOR and defines logic functions and operators for them.
- std_logic_arith defines SIGNED and UNSIGNED vectors (based on STD_LOGIC)
and basic arithmetic for them. It does NOT define arithmetic for
STD_LOGIC_VECTOR.
- std_logic_signed defines arithmetic for STD_LOGIC_VECTOR that assumes
the vector should always be implicitly treated as SIGNED.
- std_logic_unsigned is the same as above, but implicitly treats
STD_LOGIC_VECTOR as UNSIGNED (see the sketch just after this list).
Needless to say, you can't use them both.
- numeric_std is completely separate from the above (other than
std_logic_1164) and defines SIGNED and UNSIGNED and associated arithmetic
funcs much like std_logic_arith does. It is an actual IEEE standard.
- numeric_bit is pretty much the same as numeric_std, but uses BIT
instead of STD_LOGIC as basis for the SIGNED and UNSIGNED types. Use
either numeric_std or numeric_bit depending on what you use as your basic
signal type.
It appears that folks pretty much always use std_logic_1164, and then use
either the std_logic_[foo] libraries or the numeric_[foo] ones.
Given that I'm not particularly fond of implicit stuff in code in
general, and don't really think arithmetic on STD_LOGIC_VECTORs makes all
that much sense semantically, I'm inclined to go with just making a habit
of using std_logic_1164 and numeric_std and explicitly using SIGNED and
UNSIGNED (or at least type casts) when I want to do arithmetic.
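In other words, I'd write the same counter roughly like this (again just a
sketch, untested), keeping the internal signal as UNSIGNED and converting
explicitly at the port:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;  -- explicit SIGNED/UNSIGNED arithmetic

entity counter_explicit is
  port (
    clk : in  std_logic;
    q   : out std_logic_vector(7 downto 0));
end counter_explicit;

architecture rtl of counter_explicit is
  signal count : unsigned(7 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      count <= count + 1;  -- "+" on UNSIGNED from numeric_std, no guessing
    end if;
  end process;
  q <= std_logic_vector(count);  -- explicit conversion back to the port type
end rtl;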
Good idea, bad idea?
Thanks,
-jim