Daniel
I'm writing an application that (among other things) evaluates
mathematical expressions. The user enters strings containing literals
and names that later get evaluated using the Python interpreter. Here's
a short (very simplified) example:
Traceback (most recent call last):
...
TypeError: You can interact Decimal only with int, long or Decimal data
types.
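
To make the setup concrete, the failing pattern is essentially this
(the names and values here are just illustrative):

from decimal import Decimal

# the 'names' dict is required to hold Decimal values
names = {'rate': Decimal('1.1'), 'hours': Decimal('40')}

# a user-entered expression; the literal 7.5 is parsed as a float
expr = 'rate * hours + 7.5'

eval(expr, {}, names)   # raises the TypeError above, because Decimal
                        # refuses to mix with float (the exact wording
                        # of the message varies between versions)
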
I understand why I got the error, so there's no need to explain that.
It is a requirement that the 'names' dict contains Decimal values. And
of course it's unacceptable to expect my users to enter Decimal('...')
every time they enter a non-integer number. My initial solution is to
use a regular expression to wrap each float literal with Decimal('...')
before the expression is evaluated (there's a rough sketch after my two
objections below). But I don't like that solution for two reasons:
1. It seems error-prone and inelegant. To paraphrase the old saying: if
you've got a problem and you think "Ahh, I'll use regular
expressions...", now you've got two problems.
2. Error reporting is not as intuitive (I'm using the Python
interpreter, so my users see Python exceptions when their expressions
don't evaluate). After the expressions have been shot up with all the
extra Decimal junk to make them evaluate correctly, they are not nearly
as recognizable (or easy to read), and the user is likely to think "but
I didn't even write that expression... where is that Decimal('...')
stuff coming from?"
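
For reference, the kind of rewriting I mean is roughly this (an
untested sketch; the pattern and the helper name are just for
illustration):

import re
from decimal import Decimal

FLOAT_RE = re.compile(r'\b\d+\.\d+\b')   # naive: misses '.5', '1.', '1e3', ...

def wrap_floats(expr):
    # "hours * 1.1"  ->  "hours * Decimal('1.1')"
    return FLOAT_RE.sub(lambda m: "Decimal('%s')" % m.group(0), expr)

# Decimal itself now has to be visible to eval() as well
names = {'hours': Decimal('40'), 'Decimal': Decimal}
print(eval(wrap_floats('hours * 1.1'), {}, names))   # 44.0

It works, but the rewritten expression is what the user ends up seeing
in any error message, which is exactly problem 2 above.
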
Ideally I'd like to have a way to tell the interpreter to use Decimal
by default instead of float (but only in the eval() calls). I
understand the performance implications and they are of no concern. I'm
also willing to define a single global Decimal context for the
expressions (not sure if that matters or not). Is there a way to do
what I want without rolling my own parser and/or interpreter? Is there
some other alternative that would solve my problem?
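By a single global context I just mean something set once, up front,
along these lines (the specific settings are only an example):

import decimal

ctx = decimal.getcontext()        # the current (thread-local) context
ctx.prec = 28
ctx.rounding = decimal.ROUND_HALF_EVEN
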
Thanks,
~ Daniel