I'm not looking for synthesizable code. I'm just modeling some floating point instructions for a CPU.
Ultimately, I want to take a 32-bit std_logic_vector(0 to 31), interpret it as a standard single-precision floating point number, use the +, -, and * operators, and then store the result back in a 32-bit std_logic_vector(0 to 31).
I also want to do the same with a 64-bit std_logic_vector(0 to 63), interpreting it as a double-precision floating point number.
I was thinking of using the float_pkg package in ModelSim. I haven't looked into it too deeply yet, but I was wondering whether that package would be sufficient for my needs.
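Here's a rough sketch of what I'm hoping will work (simulation only, not synthesizable). I'm assuming the VHDL-2008 ieee.float_pkg with its to_float/to_slv conversions; older ModelSim versions ship the same package in the floatfixlib library instead, so the library clause may need adjusting:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.float_pkg.all;  -- VHDL-2008; older ModelSim: library floatfixlib

    entity float_model_tb is
    end entity float_model_tb;

    architecture sim of float_model_tb is
    begin
      process
        variable a_slv, b_slv, r_slv : std_logic_vector(0 to 31);
        variable a_f, b_f, r_f       : float32;  -- float(8 downto -23)
      begin
        a_slv := x"3FC00000";            -- 1.5 encoded as IEEE-754 single
        b_slv := x"40000000";            -- 2.0
        a_f   := to_float(a_slv, a_f);   -- reinterpret the 32 bits as float32
        b_f   := to_float(b_slv, b_f);
        r_f   := a_f * b_f;              -- +, - and * are overloaded for float
        r_slv := to_slv(r_f);            -- pack the result back into a vector
        report "1.5 * 2.0 = " & to_string(to_real(r_f));  -- expect 3.0
        wait;
      end process;
    end architecture sim;

For the 64-bit case I'd expect the same approach to work with float64 (float(11 downto -52)) and a std_logic_vector(0 to 63).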
Any input is appreciated. Thanks in advance!