Digital signing of Ruby scripts

John Lam

A primary scenario for my RubyCLR bridge is to enable folks to build rich
client applications on top of the .NET libraries. One potential blocking
issue is dealing with users tampering with .rb scripts on the client. I was
wondering if folks have spent some time thinking about how to package up
Ruby applications and digitally sign them.

The Monad shell team (the next generation Windows shell which uses an object
piping metaphor as opposed to the more traditional text piping metaphor in
*nix shells) already has a code signing policy in place for Monad scripts,
as well as administrator configurable policies for script execution.

Any and all thoughts around this would be greatly appreciated.

Thanks,
-John
http://www.iunknown.com

listrecv

Are you trying to address security concerns or copy protection /
digital rights?

In terms of copy protection, I see the issue as irrelevant - even
without the ruby bridge, anyone can do whatever they want with the .NET
assemblies (especially since they're so easy to disassemble).

In terms of security, how is this different from the security of a
compiled program? The two standard methods used to allow users to run
them securely are either a) trust of the author, often combined with
code signing or b) running in a sandbox. Both should work equally well
for Ruby, even with full source access.
 
John Lam


It's not copy protection that I'm worried about. Nor is it someone being
able to look at the source code. What I'm worried about is someone
*tampering* with the source code. So what I'm interested in is code signing
of Ruby scripts combined with a policy enforcement mechanism (e.g. only an
admin can install the Ruby interpreter, which is signed and only an admin
can define the execution policy of the Ruby interpreter, which can say things
ranging from "run all scripts" to "run only scripts whose public keys are
defined by the admin").

Now, maybe rich client applications built using Ruby will be more like web
pages - the real business logic lives on the server with only lightweight
validation logic on the client. However, it would be a shame to limit Ruby
apps to just that.

-John
http://www.iunknown.com


Dan Fitzpatrick

John said:
It's not copy protection that I'm worried about. Nor is it someone being
able to look at the source code. What I'm worried about is someone
*tampering* with the source code. So what I'm interested in is code signing
of Ruby scripts combined with a policy enforcement mechanism (e.g. only an
admin can install the Ruby interpreter, which is signed and only an admin
can define the execution policy of the Ruby interpreter which can say things
like "run all scripts" to "run only scripts whose public keys are defined by
the admin").

Now, maybe rich client applications built using Ruby will be more like web
pages - the real business logic lives on the server with only lightweight
validation logic on the client. However, it would be a shame to limit Ruby
apps to just that.

-John
http://www.iunknown.com

John,

One solution may be to compile a small app that takes an MD5, SHA, or
some other checksum of the Ruby code and only executes it if it is in an
internal hash of allowed files. You could have per-user hashes of
allowed files based on who is logged in. Of course, you will have to
rebuild this app every time you change the Ruby code, but that could be
automated. A user could still run the Ruby code directly unless you build
in some dependency on the compiled app. If they can see the source code,
they can copy it, tamper with it, and run it.
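Dan's launcher idea fits in a few lines. The file name and script below are invented for illustration; in practice the whitelist of digests would be baked into the compiled launcher at build time.

```ruby
require 'digest'

# Returns true only if the source matches the digest recorded for that
# file name when the launcher was built.
def allowed?(whitelist, name, source)
  whitelist[name] == Digest::SHA256.hexdigest(source)
end

source    = "puts 1 + 1\n"
whitelist = { 'calc.rb' => Digest::SHA256.hexdigest(source) }

allowed?(whitelist, 'calc.rb', source)                # => true
allowed?(whitelist, 'calc.rb', source + "# edited\n") # => false
```

As Dan notes, this only helps if users must go through the launcher; nothing stops them from feeding the script to a stock interpreter.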

Dan
 
Eric Hodel

John Lam wrote:

A primary scenario for my RubyCLR bridge is to enable folks to build rich
client applications on top of the .NET libraries. One potential blocking
issue is dealing with users tampering with .rb scripts on the client. I was
wondering if folks have spent some time thinking about how to package up
Ruby applications and digitally sign them.

The Monad shell team (the next generation Windows shell which uses an object
piping metaphor as opposed to the more traditional text piping metaphor in
*nix shells) already has a code signing policy in place for Monad scripts,
as well as administrator configurable policies for script execution.

Any and all thoughts around this would be greatly appreciated.

Rubygems now has the ability to let you sign your gems...

I'm not sure how it is implemented, but it might be what you're
looking for.
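For reference, RubyGems hangs gem signing off two gemspec fields. This is a rough sketch with illustrative names and paths; the certificate and key are whatever you generated beforehand (e.g. with `gem cert --build`).

```ruby
# Rough sketch of a signed gemspec; gem name and paths are illustrative.
Gem::Specification.new do |s|
  s.name    = 'mygem'
  s.version = '1.0.0'
  s.summary = 'A signed gem'
  s.authors = ['Someone']
  s.files   = ['lib/mygem.rb']
  # Signing: the public certificate ships inside the gem, the private
  # key stays with the author and is only needed at build time.
  s.cert_chain  = ['certs/public_cert.pem']
  s.signing_key = 'certs/private_key.pem' if File.exist?('certs/private_key.pem')
end
```

On the installing side, a trust policy such as `gem install mygem -P HighSecurity` refuses gems whose signature does not chain to a trusted certificate.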
 
listrecv

I still don't understand. Who are you trying to protect - a user from
running a malicous (or tampered with) ruby script? If so, as I said,
this is no different than protecting a user from running a trojan
compiled file - people either trust the author (and hopefully use code
signing), or run the code in a sandbox.

In terms of ensuring that only admin's can install the ruby
executable/interpreter - this is currently impossible, and likely will
remain so. Even if you mark your exe/interpreter to require admin
privs to install, what's to stop anyone else from creating their own
exe/interpreter without that restriction? It's essentially the old
copy protection / DRM issue, which all experts agree can always be
defeated (at least short of a hardware implementation).
 
John Lam


It's actually the other way around - can the author of the program trust the
user of the program? Think about a corporate environment where you're
worried about employees hacking your system. In today's SOX compliance
driven world it's not an unreasonable thing to worry about.

DRM can be used for "good" or "evil". In a corporate setting, the user
doesn't own the computer - it's the company's property. So in that case, the
company should be able to define what can and cannot execute on the machine.
So while *today* this isn't a reasonable expectation, in the future having
the ability to lock down a machine so that it only executes code that was
signed by an approved list of certificate holders seems like a really good
way to avoid problems like trusted insiders hacking your system.
 
listrecv

I have no ethical problem with DRM. I'm simply coming from a
mathematical / technical perspective.

Some of the greatest minds have worked on it, and the conclusion
is unanimous: you can make it more annoying or cumbersome for someone
to duplicate or modify the software, but you can't make it impossible.

Again, you can put whatever limitations you want into your interpreter
- but I can always modify the binary (google IDA Pro) or create my own.
 
Tanner Burson


John Lam wrote:

It's actually the other way around - can the author of the program trust the
user of the program? Think about a corporate environment where you're
worried about employees hacking your system. In today's SOX compliance
driven world it's not an unreasonable thing to worry about.

DRM can be used for "good" or "evil". In a corporate setting, the user
doesn't own the computer - it's the company's property. So in that case, the
company should be able to define what can and cannot execute on the machine.
So while *today* this isn't a reasonable expectation, in the future having
the ability to lock down a machine so that it only executes code that was
signed by an approved list of certificate holders seems like a really good
way to avoid problems like trusted insiders hacking your system.


Given that you're bridging to .NET could you use that to your advantage? Do
some sort of a hash, or simple signature of the ruby code, which gets passed
through the bridge, and then let .NET handle that part. Hmm...more random
musing here...

What if you took the ruby code, and compiled it into a .NET exe as an
embedded resource (far from secure, but it would allow you to use strong
name keys or something similar on the entire assembly) then use a generic
Main function that either embeds a ruby interpreter, or 'forks' one out to
call the code? I realize this isn't much different than exerb or the like,
but being in a .NET assembly could allow you the strong naming and such.


--
===Tanner Burson===
(e-mail address removed)
http://tannerburson.com <---Might even work one day...

Patrick Hurley

Tanner Burson wrote:

Given that you're bridging to .NET could you use that to your advantage? Do
some sort of a hash, or simple signature of the ruby code, which gets passed
through the bridge, and then let .NET handle that part. Hmm...more random
musing here...

What if you took the ruby code, and compiled it into a .NET exe as an
embedded resource (far from secure, but it would allow you to use strong
name keys or something similar on the entire assembly) then use a generic
Main function that either embeds a ruby interpreter, or 'forks' one out to
call the code? I realize this isn't much different than exerb or the like,
but being in a .NET assembly could allow you the strong naming and such.

The idea I am bouncing around is to build a version of the Ruby
interpreter that has an embedded public key. Then all Ruby code would
carry, in a comment header/footer, a signature that was generated with
the private key.

Keeping people from loading their own version of ruby remains a
problem, but this would remove the code insertion into the valid ruby
interpreter issue.
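Patrick's comment-footer scheme might look like this sketch. The SIGNATURE marker and the Base64 encoding of the signature are my invention, just to make the idea concrete; the embedded public key would be compiled into the interpreter.

```ruby
require 'openssl'
require 'base64'

SIG_MARKER = "# SIGNATURE: "  # footer convention invented for this sketch

# Build step: append a Base64 signature of the script body as a comment.
def sign_script(body, private_key)
  sig = private_key.sign(OpenSSL::Digest::SHA256.new, body)
  body + SIG_MARKER + Base64.strict_encode64(sig) + "\n"
end

# Interpreter side: split off the footer and verify the rest against
# the embedded public key before running anything.
def verified?(signed, public_key)
  body, marker, footer = signed.rpartition(SIG_MARKER)
  return false if marker.empty?  # no signature footer at all
  sig = Base64.strict_decode64(footer.chomp)
  public_key.verify(OpenSSL::Digest::SHA256.new, sig, body)
rescue ArgumentError  # malformed Base64 in the footer
  false
end

key    = OpenSSL::PKey::RSA.new(2048)
signed = sign_script("puts 'ok'\n", key)

verified?(signed, key.public_key)                       # => true
verified?(signed.sub("'ok'", "'evil'"), key.public_key) # => false
```

Because the signature lives in a comment, the signed file remains a valid script for a stock interpreter, which is exactly the residual weakness Patrick points out next.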

Is this what you have in mind? This is not a bad idea -- I can see
where it could be necessary/worthwhile in some client code situations.
Not trying to hide the code, but to make tampering with the code
increasingly difficult. Of course, if system security on the machine is
compromised (to the extent that someone can change a file they should
not have permission to change), then it is likely that this is wasted effort.

Validating that a particular client installed version of ruby is the
correct one is (IMHO) an impossible task as you would have to encode a
"secret" into the interpreter (which could then be decompiled and made
unsecret) or have some hardware level support which does not currently
exist on PC platforms.

pth
 
