I have an app that receives input from a user, executes a calculation,
and displays a numerical result. The user may choose (or so the
theory goes) the number of decimal places s/he prefers in the output
(ranging from 0 to 10). However, I'm having a tough time figuring
out how to do this.
The input is a text box, and the text value is converted to a
BigDecimal. If the result is, say, 123.4567 and the user wishes to
limit the decimal places to three, my code works well, and the output
would be 123.457. That works. However, if the result is a very large
or very small number, I can't seem to get the proper output: for very
large numbers the output I get is 0,000,000, and for very small
numbers it's just 0.
My goal is to set a limit for very large/small numbers where, once
reached, the result format will switch to scientific notation while
still respecting the user's choice for number of decimal places.
Here's some of my code:
int precision = 3; // read from a UI control and successfully
                   // converted to an int (= no. of decimal places)
DecimalFormat decimals = new DecimalFormat();
double dbl = Double.parseDouble(txtbxConvertFrom.getText()); // parseDouble is static; no Double instance needed
BigDecimal convertValue = BigDecimal.valueOf(dbl);
Converter converter = new Converter(parameters);
BigDecimal convertedResult = converter.getConvertedUnit(); // this works fine
decimals.setMaximumFractionDigits(precision);
decimals.setMaximumIntegerDigits(7);
txtbxConvertTo.setText(decimals.format(convertedResult)); // format() already returns a String
// This last line results in the erroneous behaviour cited above.
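If it helps, here's a minimal stand-alone illustration of the symptom (the class name and sample values are mine, just for the demo; my assumption is that DecimalFormat silently drops integer digits beyond the maximum and rounds away fraction digits past the limit):

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

// Hypothetical demo of the behaviour described above.
public class MaxDigitsDemo {
    public static void main(String[] args) {
        DecimalFormat decimals =
                new DecimalFormat("#,##0.###", DecimalFormatSymbols.getInstance(Locale.US));
        decimals.setMaximumFractionDigits(3);
        decimals.setMaximumIntegerDigits(7);

        // Very large value: DecimalFormat drops the high-order integer digits
        // beyond the maximum of 7, leaving only the low-order zeros:
        System.out.println(decimals.format(new BigDecimal("1E20")));  // 0,000,000

        // Very small value: every fraction digit past the limit rounds away:
        System.out.println(decimals.format(new BigDecimal("1E-10"))); // 0
    }
}
```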
I've also tried working with 'new MathContext(precision)', which
successfully switches to scientific notation when the number is too
big or too small, but it uses the variable 'precision' to set the
significant figures, not the decimal places (a result might be 1.23E6,
for example, which appropriately has 3 significant figures but only
two decimal places where three are needed).
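To make that concrete, here's a small stand-alone sketch of the MathContext behaviour (class name and sample value are mine, purely for illustration):

```java
import java.math.BigDecimal;
import java.math.MathContext;

// Hypothetical demo: MathContext's precision counts significant digits,
// not decimal places.
public class MathContextDemo {
    public static void main(String[] args) {
        int precision = 3; // meant as "3 decimal places" by the user
        BigDecimal result = new BigDecimal("1234567.89");

        // round() keeps 3 *significant figures*, so the three decimal
        // places the user asked for are lost:
        System.out.println(result.round(new MathContext(precision))); // 1.23E+6
    }
}
```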
My question is, how can I obtain a result that consistently respects
the user-selected number of decimal places, even after switching to
scientific notation?
BTW, there seem to be far too many possibilities to simply apply one
particular fixed pattern such as #.## (unless there's some way to
build the pattern from code, which I've tried rather unsuccessfully).
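For reference, the kind of pattern-built-from-code attempt I mean looks roughly like this (a sketch, not my actual code; it gives scientific notation with a fixed number of fraction digits, but I haven't found a clean way to switch to it only when the magnitude demands it):

```java
import java.math.BigDecimal;
import java.text.DecimalFormat;

// Hypothetical sketch: build a scientific-notation pattern from the
// user's precision (e.g. "0.000E0" for precision = 3).
public class PatternSketch {
    public static void main(String[] args) {
        int precision = 3; // user-selected number of decimal places
        StringBuilder pattern = new StringBuilder("0.");
        for (int i = 0; i < precision; i++) {
            pattern.append('0'); // one mandatory fraction digit per decimal place
        }
        pattern.append("E0"); // exponent part of the pattern
        DecimalFormat sci = new DecimalFormat(pattern.toString());
        System.out.println(sci.format(new BigDecimal("1234567.89")));
    }
}
```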
Thanks for considering this. Sincerely,
Greg.