> Well, manually applying BigDecimal wherever necessary for
> performance reasons feels like programming in C (remember floats and
> doubles? /me vomits).

Yes, but there are performance losses, and then there are performance
losses. For the fixnum -> bignum conversion, there are a grand total of
two cutover points where you have to convert from one representation to
the other. For a Float -> BigDecimal conversion, there's probably an
infinity of them.
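The fixnum -> bignum point fits in a couple of lines. A minimal sketch (the class names assume Ruby 2.4+, where Fixnum and Bignum were unified into Integer; on the 1.8-era Ruby this thread was written against, the two values would report Fixnum and Bignum respectively):
---
# Integer arithmetic crosses the machine-word boundary transparently,
# so there are only two cutover points on the whole number line --
# one positive, one negative.
a = 2**62        # still fits in a tagged machine word on 64-bit builds
b = a * 2        # silently promoted to a multi-word representation

puts a.class     # Integer
puts b.class     # Integer
puts b == 2**63  # true -- exact result, no overflow
---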
Also:
---
require 'bigdecimal'
require 'benchmark'

N = 1000000

Benchmark.bmbm { |test|
  test.report("Float") {
    for i in 1..N
      Math::PI / Math::E
    end
  }
  test.report("BigDecimal") {
    for i in 1..N
      BigDecimal(String(Math::PI)) /
        BigDecimal(String(Math::E))
    end
  }
}
---
Rehearsal ----------------------------------------------
Float 0.937000 0.000000 0.937000 ( 0.969000)
BigDecimal 30.594000 0.093000 30.687000 ( 30.922000)
------------------------------------ total: 31.624000sec
user system total real
Float 1.031000 0.000000 1.031000 ( 1.015000)
BigDecimal 30.453000 0.078000 30.531000 ( 30.766000)
---
That's a slowdown by a factor of 30. Which means simple decimal maths in
Ruby could, at my wild guess, be slower by at least a factor of 20 on
average. That's without the overhead of verifying whether the result of
a maths operation can be represented in IEEE floats. Besides, even
BigDecimals lose precision eventually, and they don't have nearly as
many use cases as either arbitrary-size integers (which help prevent
counter overflows) or fast floating-point maths.
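To illustrate the precision point, a small sketch (assuming a Ruby recent enough that BigDecimal() accepts integers): even BigDecimal division has to truncate a non-terminating decimal expansion somewhere.
---
require 'bigdecimal'

# 1/3 has no finite decimal representation, so BigDecimal division
# truncates it to a finite number of significant digits...
third = BigDecimal(1) / BigDecimal(3)
puts third.to_s

# ...which means multiplying back by 3 does not recover exactly 1.
puts (third * 3) == BigDecimal(1)  # false
---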
For a more "interesting" benchmark, let's see less trivial number
crunching.
---
require 'bigdecimal'
require 'bigdecimal/math'
require 'benchmark'

N = 100000

include BigMath

Benchmark.bmbm { |test|
  test.report("Float") {
    for i in 1..N
      Math.log(Math::PI) / Math.log(Math::E)
    end
  }
  test.report("BigDecimal") {
    for i in 1..N
      log(BigDecimal(String(Math::PI)), 20) /
        log(BigDecimal(String(Math::E)), 20)
    end
  }
}
---
Rehearsal ----------------------------------------------
Float 0.172000 0.000000 0.172000 ( 0.203000)
BigDecimal 278.641000 0.656000 279.297000 (279.422000)
----------------------------------- total: 279.469000sec
user system total real
Float 0.218000 0.000000 0.218000 ( 0.203000)
BigDecimal 276.266000 0.625000 276.891000 (286.234000)
---
That's a slowdown by a factor of roughly 1267... I don't need profiling
to see that this would kill even casual use of nontrivial amounts of
number crunching.
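And the Float answer isn't even meaningfully less accurate here. A quick sanity check (same log computation and precision parameter 20 as in the benchmark above):
---
require 'bigdecimal'
require 'bigdecimal/math'

# Compare the plain-Float result with the 20-digit BigDecimal result.
f = Math.log(Math::PI) / Math.log(Math::E)
b = BigMath.log(BigDecimal(Math::PI.to_s), 20) /
    BigMath.log(BigDecimal(Math::E.to_s), 20)

# The two agree to roughly IEEE double precision (~15 significant
# digits), which is all the precision the Float inputs carried anyway.
puts (BigDecimal(f.to_s) - b).abs < BigDecimal("1e-14")  # true
---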
> I don't care so much about performance; rather, I want to write code
> quickly and elegantly using conceptual abstractions. If I really
> need performance, I will profile the code and re-write the slow
> parts in C.
Which turns your proposal into "Let's make Ruby behave not like C,
crippling speed, so that everyone ends up rewriting maths code in C". I
fail to see the gain. As I said, the automagical conversion is somewhere
on my list of Potential Shiny Things. I doubt it's an essential feature
for, well, anyone. And it would be actively harmful to make it default
behaviour, or ever alter the behaviour to this in a library released
into the wild.
David Vallner