ANN: Swiftiply 0.5.0

khaines

I'd like to announce the first public release of Swiftcore Swiftiply, a
clustering proxy for web applications written in Ruby.


Kirk Haines

-----

Swiftiply v. 0.5.0 (http://swiftiply.swiftcore.org)

Swiftiply is a backend-agnostic clustering proxy for web applications,
specifically designed to support HTTP traffic from web frameworks. Unlike Pen
(http://siag.nu/pen/), Swiftiply is not intended as a general-purpose load
balancer for TCP protocols, and unlike HAProxy (http://haproxy.1wt.eu/), it is
not a highly configurable general-purpose proxy overflowing with features.

What it is, though, is a very fast, narrowly targeted clustering proxy.
In back-to-back comparisons with HAProxy, Swiftiply reliably
outperforms it (tested using IOWA, Rails, Merb, and Ramaze backend
processes running Mongrel).

Swiftiply works differently from a traditional proxy. In Swiftiply, the
backend processes are clients of the Swiftiply server -- they make persistent
socket connections to Swiftiply. One of the major advantages of this
architecture is that it allows one to start or stop backend processes at will,
with no configuration of the proxy. The obvious disadvantage is that this is
not behavior that backends typically expect.
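
The reversed connection direction can be sketched in plain Ruby with stdlib
sockets (illustrative only; Swiftiply's real wire protocol lives in its
modified Mongrel, and the line-based exchange here is a stand-in):

```ruby
require 'socket'

# Illustrative sketch: the backend dials *out* to the proxy and holds a
# persistent connection, so backends can come and go without any
# reconfiguration of the proxy.
def run_backend(proxy_host, proxy_port)
  sock = TCPSocket.new(proxy_host, proxy_port) # persistent outbound connection
  while (request = sock.gets)                  # proxy pushes requests down it
    sock.puts "handled: #{request.chomp}"
  end
ensure
  sock&.close
end
```

Note that the proxy never needs to know the backend's address in advance; it
simply hands work to whichever backends are currently connected.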

Because Mongrel is the preferred deployment method for most Ruby frameworks,
Swiftiply includes a version of Mongrel (found in swiftcore/swiftiplied_mongrel.rb)
that has been modified to work as a Swiftiply client. This should be
transparent to any existing Mongrel handlers, allowing them all to work
with Swiftiply.

In addition, as an offshoot of the swiftiplied_mongrel, a second version
is available in swiftcore/evented_mongrel.rb: a Mongrel that has its
network traffic handled by EventMachine, so it runs in an event-based
mode instead of a threaded mode. For many applications, event-based
operation will give better throughput than threaded operation, especially
when there are concurrent requests coming in.

This is because event-based operation handles requests efficiently, on a
first-come, first-served basis, without the overhead of threads. For the
typical Rails application, this means that request handling may be slightly
faster than with the threaded Mongrel for single, non-concurrent requests.
When there are concurrent requests, though, the differential increases
quickly.


FRAMEWORK SUPPORT


Swiftcore IOWA

IOWA has built in support for running in evented and clustered modes.


Rails

Swiftiply provides a _REPLACEMENT_ for mongrel_rails that, through the use
of an environment variable, can be told to run in either the evented mode or
the swiftiplied mode.

To run a Rails app in evented mode, set the EVENT environment variable. On
a Unix-like system:

env EVENT=1 mongrel_rails

will do it.

To run in swiftiplied mode:

env SWIFTIPLY=1 mongrel_rails

Because Swiftiply backends connect to the Swiftiply server, they all connect
on the same port; this is important, as each of the backends runs against
that same port. To make it easier to start multiple Rails backends, a helper
script, swiftiply_mongrel_rails, is provided. It is just a light wrapper
around mongrel_rails that will let one start N backends, with proper pid
files, and stop them.
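
What such a wrapper has to do differently from a port-based cluster tool can
be sketched like this (the command line and pid-file names here are
assumptions for illustration, not the real script's):

```ruby
# Illustrative: every backend uses the SAME port, so pid files must be
# numbered by instance rather than by port.
def backend_commands(n, pid_dir: 'tmp/pids')
  (1..n).map do |i|
    pid_file = File.join(pid_dir, "swiftiply_backend.#{i}.pid")
    "env SWIFTIPLY=1 mongrel_rails start -d -P #{pid_file}"
  end
end
```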


Merb

The Merb source (trunk only, at this point) has built-in Swiftiply support
that works just like the Rails support.


Ramaze

A couple of adapters for Ramaze are included, allowing Ramaze to run with
either the evented or the swiftiplied Mongrel. They are installed into

ramaze/adapter/evented_mongrel.rb
ramaze/adapter/swiftiplied_mongrel.rb


Other Frameworks

Swiftiply has been tested with Camping and Nitro, as well. Direct support for
them is not yet bundled, but will be in an upcoming release.


CONFIGURATION

Swiftiply takes a single configuration file which defines where it should
listen for incoming connections and whether it should daemonize itself, and
then provides a map from incoming domain names to the address/port to proxy
that traffic to. That outgoing address/port is where the backends for that
site will connect.

Here's an example:

cluster_address: swiftcore.org
cluster_port: 80
daemonize: true
map:
  - incoming:
      - swiftcore.org
      - www.swiftcore.org
    outgoing: 127.0.0.1:30000
    default: true
  - incoming: iowa.swiftcore.org
    outgoing: 127.0.0.1:30010
  - incoming: analogger.swiftcore.org
    outgoing: 127.0.0.1:30020
  - incoming:
      - swiftiply.com
      - www.swiftiply.com
      - swiftiply.swiftcore.org
    outgoing: 127.0.0.1:30030
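
The file is plain YAML; as a quick, illustrative sanity check of the shape
(not part of Swiftiply itself), the map can be walked like this:

```ruby
require 'yaml'

# Illustrative: parse a config like the example above and print the
# routing table. 'incoming' may be a single host or a list of hosts.
config = YAML.load(<<~CONF)
  cluster_address: swiftcore.org
  cluster_port: 80
  daemonize: true
  map:
    - incoming:
        - swiftcore.org
        - www.swiftcore.org
      outgoing: 127.0.0.1:30000
      default: true
    - incoming: iowa.swiftcore.org
      outgoing: 127.0.0.1:30010
CONF

config['map'].each do |entry|
  hosts = Array(entry['incoming'])
  puts "#{hosts.join(', ')} -> #{entry['outgoing']}"
end
```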
 
Rick DeNatale

> I'd like to announce the first public release of Swiftcore Swiftiply, a
> clustering proxy for web applications written in Ruby.

This looks promising, but I don't seem to see a way to use
mongrel_cluster instead of just plain mongrel[_rails]

Currently I use pen in front of a mongrel cluster. I'd love to see an
easy way to host multiple back-end clusters for multiple apps under
different virtual hosts or root URLs.

This looks like it nicely handles the multiple apps case but, unless I
miss it, not multiple instances per app.
 
khaines

> This looks promising, but I don't seem to see a way to use
> mongrel_cluster instead of just plain mongrel[_rails]
>
> Currently I use pen in front of a mongrel cluster. I'd love to see an
> easy way to host multiple back-end clusters for multiple apps under
> different virtual hosts or root URLs.
>
> This looks like it nicely handles the multiple apps case but, unless I
> miss it, not multiple instances per app.

By multiple instances per app, I assume that you mean multiple backend
processes, right?

That's the whole point. They all connect to the SAME address/port,
because the instances are clients of swiftiply instead of being standalone
servers.

So, for example, if you have the following config section:

map:
  - incoming: planner.walrusinc.com
    outgoing: frontend.walrusinc.com:11111
  - incoming: blog.walrusinc.com
    outgoing: frontend.walrusinc.com:11112

Then Swiftiply proxies requests for planner.walrusinc.com to the backends
connected to frontend.walrusinc.com:11111, and blog.walrusinc.com to the
backends connected to frontend.walrusinc.com:11112.

Because the backends are clients of Swiftiply, you can have as many of
them as you need, all connected to the same point. So
blog.walrusinc.com is running with 2 backends, but walrusinc's thoughts
become popular, so they spin up a second machine with a couple more
backends on it. All of the backends connect to
frontend.walrusinc.com:11112.

swiftiply_mongrel_rails will start N mongrel_rails processes, all on the
same address/port, with a pid file for each (since the default pid file is
named by port with mongrel_rails, that won't work when all of the
processes are connected to the same port).

However, I am sure that better integration with mongrel_cluster is needed.
More complete support for all Ruby frameworks is my primary goal for the
next release.



Kirk Haines
 
Rick DeNatale

> By multiple instances per app, I assume that you mean multiple backend
> processes, right?
>
> That's the whole point. They all connect to the SAME address/port,
> because the instances are clients of swiftiply instead of being standalone
> servers.
>
> So, for example, if you have the following config section:
>
> map:
>   - incoming: planner.walrusinc.com
>     outgoing: frontend.walrusinc.com:11111
>   - incoming: blog.walrusinc.com
>     outgoing: frontend.walrusinc.com:11112
>
> Then Swiftiply proxies requests for planner.walrusinc.com to the backends
> connected to frontend.walrusinc.com:11111, and blog.walrusinc.com to the
> backends connected to frontend.walrusinc.com:11112.
>
> Because the backends are clients of Swiftiply, you can have as many of
> them as you need, all connected to the same point. So blog.walrusinc.com
> is running with 2 backends, but walrusinc's thoughts become popular, so
> they spin up a second machine with a couple more backends on it. All of
> the backends connect to frontend.walrusinc.com:11112.
>
> swiftiply_mongrel_rails will start N mongrel_rails processes, all on the
> same address/port, with a pid file for each (since the default pid file is
> named by port with mongrel_rails, that won't work when all of the
> processes are connected to the same port).

Okay, I missed that, so swiftiply_mongrel_rails is really a substitute
for mongrel_cluster.

I guess that you can also put this behind, say, Apache by setting the
cluster_port to something other than 80 and using mod_proxy.

> However, I am sure that better integration with mongrel_cluster is needed.
> More complete support for all Ruby frameworks is my primary goal for the
> next release.

I guess that one concern is tracking changes to the base mongrel.

Another would be how this interacts with deployment tools like
Capistrano and Deprec.
 
khaines

> Okay, I missed that, so swiftiply_mongrel_rails is really a substitute
> for mongrel_cluster.

Maybe? It was contributed by Ezra Z. and really just represents a
starting place for figuring out the best way to support Rails users.

> I guess that you can also put this behind, say, Apache by setting the
> cluster_port to something other than 80 and using mod_proxy.

Yep. In the documentation I used port 80 because one of the ways that I
am using this with IOWA is without any other web server in front of it.
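
For example, with cluster_port set to something like 8080, an Apache vhost
along these lines would front Swiftiply (the hostname and port here are
illustrative, not from the docs):

```apache
# Illustrative only: hostname and port are examples.
<VirtualHost *:80>
    ServerName swiftcore.org
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```

ProxyPreserveHost matters here, since Swiftiply routes on the incoming
Host name.
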

> I guess that one concern is tracking changes to the base mongrel.

(*nod*) The changes are all intended to be transparent to a Mongrel
handler, so the handlers neither know nor care whether they are running in a
threaded mongrel, in an evented mongrel, or in a swiftiplied mongrel.

It just dawned on me that I did not do this in the gem I released last
night, but I intended to insert Mongrel 1.0.1 as a dependency, and will
then keep the codebase current to the latest stable Mongrel release as
time moves forward.

> Another would be how this interacts with deployment tools like
> Capistrano and Deprec.

I wouldn't think there is anything special needed for interaction with a
deployment tool. Can you expand on your concern there?


Thanks

Kirk Haines
 
