Scott Sauyet
|| I am beginning to think that the example was not such a good
|| idea... [...]
| yes it's flawed. It is an EXAMPLE, not a proof of concept. You can do
| better if you wish.
But the point is that this was an attempt at a simple example. If we
can't get a simple case to work, how likely is it that the more
complicated ones will be workable? Let's take it simpler still. How
about if our mangler had a rule to turn this:
    function f(x) {
        return x + 1;
    }

into this:

    function f_prime(x) {
        var y = x;
        return y + 1;
    }
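As a quick sanity check (a minimal, runnable sketch), the two versions can be compared directly over a range of inputs:

```javascript
function f(x) {
    return x + 1;
}

function f_prime(x) {
    var y = x;
    return y + 1;
}

// The rewrite preserves behavior for every input we try.
for (var i = -5; i <= 5; i++) {
    console.assert(f(i) === f_prime(i), "mismatch at " + i);
}
```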
Except noting possible boundary issues, it's pretty clear that we've
maintained the correspondence between f and f_prime. But now what if
we tried it on this:
    function g(x) {
        eval("var y = y ? y + x : x");
        return x + 1;
    }

to get this:

    function g_prime(x) {
        var y = x;
        eval("var y = y ? y + x : x");
        return y + 1;
    }
Then g(20) = 21, but g_prime(20) = 41.
And of course we could rig it so that the "y" is not easily visible
inside the eval statement. The point is that this really is
difficult.
| - of course it is possible to do it, at least to some extent (still
| not buying the Rice's theorem having to do anything with this - as
| Lasse pointed out, compilers do that all the time)
I'm certainly not buying Thomas Lahn's assertion that Rice's theorem
implies that
| [Y]ou cannot devise a general function that computes another
function
| that computes the same result for the same input (a trivial
property)
| as a third function.
I think the Y-Combinator [1] demonstrates that this is false.
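A concrete sketch of that claim in JavaScript itself: since JavaScript evaluates arguments eagerly, the applicative-order variant (often called the Z combinator) is needed, but it is a general function that manufactures a function computing the same results as a third (recursively defined) one:

```javascript
// Applicative-order fixed-point combinator (the Z combinator),
// since JavaScript evaluates arguments eagerly.
var Z = function(f) {
    return (function(x) { return f(function(v) { return x(x)(v); }); })
           (function(x) { return f(function(v) { return x(x)(v); }); });
};

// A function-producing function: given "itself", it computes factorial.
var fact = Z(function(self) {
    return function(n) { return n <= 1 ? 1 : n * self(n - 1); };
});

console.log(fact(5)); // 120
```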
But I also think Lasse Nielsen is overstating when he says that this
is so similar to what a compiler does. A compiler is doing something
akin to transcription. I think your mangler would have to do
something closer to translation. While I have read Hofstadter [2] and
understand something about how difficult it might be to draw the line
between the two, I think it makes a difference in how easy it would be
to create an algorithm for one versus the other.
| Well, that might be true. But not because of all the things others
| have written in these posts, but because JS is such a difficult
| language for this task. I am in no way an expert in JS, but from what
| I have seen (closures come to mind) there are some concepts that make
| it really hard to do this kind of thing. Not impossible, but
| difficult.
Actually, I think closures would be one of the best tools to use to
accomplish some obfuscation.
    var f_prime2 = (function(test) {
        var y = test ? 0 : 1;
        return function(x) {
            return x + y;
        };
    })(false);
This is already fairly obfuscated from the original function, and I
can clearly imagine additional layers. But I don't see that as likely
to hide much from even a mildly persistent snoop.
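One such additional layer (purely illustrative; `f_prime3` is a made-up name) splits the constant across two nested closures and recombines it, so neither closure alone reveals the original `+ 1`:

```javascript
// A second layer: the constant 1 is split into -2 and 3 across two
// closures, so neither immediately betrays f's behavior.
var f_prime3 = (function(a) {
    return (function(b) {
        return function(x) {
            return x + a + b;   // a + b === 1, recovering f
        };
    })(3);
})(-2);

console.log(f_prime3(20)); // 21
```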
I think the worst problem would come with "eval" and its cousins.
| Not sure it would be a bad business idea though - seeing how
| obfuscators sell for $60-$250 when they are not even remotely
| successful. If someone is capable of doing it, of course. I would
| have bought a copy right now, but of course, it would have to be so
| good that a capable programmer would have difficulties deducing what
| the code does.
I think the best assessment in this thread was Lasse Nielsen's first
response, where he discussed the factors to be considered in
approaching this type of tool and concluded:
| I personally think any informed evaluation of that kind would
| end in rejecting the idea.
It's not that something couldn't be built. Clearly someone could
build a tool that would accomplish much of what you suggest. But as
the inputs get more complicated, it becomes more and more likely that
the tool would actually break working code, and of course the issues
that arose would, by design, be much more difficult to debug.
-- Scott
[1] http://en.wikipedia.org/wiki/Fixed_point_combinator
[2] http://en.wikipedia.org/wiki/Metamagical_Themas