Hi there,
I am dealing with the following problem:
I need to convert a std::vector<T> (where T can be any integer
type: char, short, ushort) by applying a linear transform (a,b),
as follows:
output = a*input + b   (a & b are of floating point type)
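In code, what I have in mind is roughly this (a minimal sketch; the
function name and the use of double for a and b are just mine for
illustration, picking OutT is the actual problem):

    #include <vector>

    // Sketch: apply output = a*input + b to every element; the caller
    // chooses the output value type via OutT.
    template <typename OutT, typename InT>
    std::vector<OutT> apply_linear(const std::vector<InT>& in, double a, double b)
    {
        std::vector<OutT> out;
        out.reserve(in.size());
        for (typename std::vector<InT>::const_iterator it = in.begin();
             it != in.end(); ++it)
            out.push_back(static_cast<OutT>(a * *it + b));
        return out;
    }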
Since memory space is important, I am trying to divide the problem
into subcases. Basically:
1. If a or b is a float, then the output vector needs to be declared as
vector<float> (the input vector is at most a 16-bit integer type).
2. If a & b are integers, I need to compute the min/max of the input
scalar type, apply the transform and check the output interval to find
which C type is the best match.
Since this looks like boilerplate code, I was wondering if there was
anything I could reuse (other than just numeric_limits to find the
min/max). Even just the interval calculation is tricky, although in my
case I can just cast everything to double since I am dealing with at
most 32-bit calculations.
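Just to make case 2 concrete, this is the kind of check I have in mind
(a rough sketch; the helper names fits_in/pick_output are mine and the
candidate types are hard-coded):

    #include <algorithm>
    #include <limits>

    // Does the interval [lo, hi] fit entirely in the range of OutT?
    // All comparisons are done in double, which is exact for 16-bit
    // inputs and 32-bit results.
    template <typename OutT>
    bool fits_in(double lo, double hi)
    {
        return lo >= static_cast<double>(std::numeric_limits<OutT>::min())
            && hi <= static_cast<double>(std::numeric_limits<OutT>::max());
    }

    template <typename InT>
    void pick_output(double a, double b)
    {
        const double in_min = std::numeric_limits<InT>::min();
        const double in_max = std::numeric_limits<InT>::max();

        // The extrema of a*x + b over [in_min, in_max] are attained at
        // the endpoints, whatever the sign of a.
        const double lo = std::min(a * in_min + b, a * in_max + b);
        const double hi = std::max(a * in_min + b, a * in_max + b);

        if      (fits_in<unsigned char>(lo, hi))  { /* use vector<unsigned char>  */ }
        else if (fits_in<signed char>(lo, hi))    { /* use vector<signed char>    */ }
        else if (fits_in<unsigned short>(lo, hi)) { /* use vector<unsigned short> */ }
        else if (fits_in<short>(lo, hi))          { /* use vector<short>          */ }
        else if (fits_in<unsigned int>(lo, hi))   { /* use vector<unsigned int>   */ }
        else if (fits_in<int>(lo, hi))            { /* use vector<int>            */ }
        else                                      { /* fall back to vector<float> */ }
    }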
thanks for comments,
-Mathieu