thinkfr33ly
We're implementing a "Google Adwords"-like service for an affiliate
program for one of our ASP.NET sites. (Google Adwords is the
advertising program where affiliates place iFrames on their site and
pass keywords to Google via the SRC attribute of the iFrame. Google
then uses those keywords to generate applicable results for the
contents of the iFrame.)
Our catalog will contain between 7,000 and 15,000 files that each
correspond to a piece of content we offer. With the results we get back
from Index Server we can piece together the URLs/content for the
iFrame.
I'm currently using an HTTP Handler and the OleDB provider for Index
Server, along with a simple ExecuteReader command.
From my preliminary performance testing, it appears that we can
probably do around 750 RPS on each of our dual Xeon (3.2 GHz) / 1.5 GB
of RAM Win2k3 web servers. This, however, pegs the CPUs. So,
realistically, we're looking at more like 350 to 450 RPS.
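Roughly speaking, the handler boils down to the sketch below. (The
catalog name "Web", the "kw" query-string parameter, and the columns
selected are placeholders for illustration, not our real values.)

using System.Data.OleDb;
using System.Web;

public class KeywordSearchHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        string keywords = context.Request.QueryString["kw"];
        if (keywords == null) keywords = string.Empty;

        // Escape embedded quotes and inline the keyword text into the
        // Indexing Service query, then rank the matches.
        string sql = "SELECT vpath, rank FROM SCOPE() " +
                     "WHERE FREETEXT('" + keywords.Replace("'", "''") + "') " +
                     "ORDER BY rank DESC";

        using (OleDbConnection conn =
            new OleDbConnection("Provider=MSIDXS;Data Source=Web;"))
        using (OleDbCommand cmd = new OleDbCommand(sql, conn))
        {
            conn.Open();
            using (OleDbDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Piece together the URLs/content for the iFrame from
                    // each matching file's virtual path.
                    context.Response.Write(reader["vpath"] + "<br/>");
                }
            }
        }
    }
}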
This isn't horrible, but I was hoping for more. The first thought I had
was to implement an output caching solution, but because we don't
control the input of keywords we would be creating a scenario where we
might be caching far too many combinations of keywords and essentially
opening ourselves up to a DoS attack. (Somebody could easily craft a
script that submitted millions of random keywords to our application,
thereby quickly filling up the memory cache.)
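Just to make the concern concrete, the caching I'm describing would be
roughly the fragment below, inside ProcessRequest above, where the raw
keyword string becomes the cache key. (RenderResults is a hypothetical
helper wrapping the Index Server query; it isn't real code we have.)

// Cache the rendered output keyed by the raw keyword string. Since
// callers control the keywords, the key space is unbounded and junk
// requests would fill the cache with one-off entries.
string cacheKey = "kw:" + keywords.Trim().ToLower();
string html = (string)context.Cache[cacheKey];
if (html == null)
{
    html = RenderResults(keywords);  // hypothetical helper, see above
    context.Cache.Insert(cacheKey, html, null,
        DateTime.Now.AddMinutes(10),
        System.Web.Caching.Cache.NoSlidingExpiration);
}
context.Response.Write(html);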
Does anybody have any suggestions for improving the performance of an
Index Server-based searching scenario like ours?
Thanks,
RMD