Gary W. Smith
I had a couple of questions about data caching. We have a site that gets
a huge amount of traffic to a few specific pages that carry a lot of
data (300k hits/hour during peak, about 6-10 data calls per page, or
roughly 500 SQL calls per second). Most of the data doesn't change more
than once every 10 minutes; basically they are data islands with
specific content. We have already implemented a singleton that manages
a connection pool. This has sped things up a bit, but the database
still gets a large number of hits, and that is affecting the
performance of other applications because the web site is pounding away
with these hits.
I was thinking about adding a second caching singleton that would cache
each SQL statement, its result set (usually no more than 10 rows, about
3 KB of data total, not including overhead), the time the statement was
last run, and its timeout. Kept in a synchronized Hashtable (with the
appropriate locks), these items should be manageable.
My intent was to make a call to the CacheDB object, which would check
whether the item exists in the cache; if not, it would request the
data, lock the Hashtable, and insert it. Each subsequent request would
either find the entry or it wouldn't. The CacheDB object would also run
a timer callback that iterates through the cached objects, finds the
ones that are beyond their allotted lifetime, and, in the background,
re-executes the SQL statement, locks the Hashtable, and updates the
entry.
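To make the idea concrete, here is a minimal sketch of the pattern I have in mind. The class names (CacheDB, CacheEntry) and the ExecuteQuery stand-in are just placeholders, not our real code, and the 300-second sweep and 10-minute lifetime are the figures mentioned above:

```csharp
using System;
using System.Collections;
using System.Threading;

// Hypothetical sketch: cache entries keyed by SQL text, refreshed
// in the background by a timer callback.
public sealed class CacheEntry
{
    public object Data;          // would be a DataTable in practice
    public DateTime LastRun;     // when the statement was last executed
    public TimeSpan Lifetime;    // how long the result stays fresh
}

public sealed class CacheDB
{
    private static readonly CacheDB instance = new CacheDB();
    private readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());
    private readonly Timer refreshTimer;

    private CacheDB()
    {
        // Sweep every 300 seconds, same interval as the pooling singleton.
        refreshTimer = new Timer(Refresh, null,
            TimeSpan.FromSeconds(300), TimeSpan.FromSeconds(300));
    }

    public static CacheDB Instance { get { return instance; } }

    public object Get(string sql)
    {
        CacheEntry entry = (CacheEntry)cache[sql];
        if (entry == null)
        {
            // Miss: run the query, then lock the Hashtable and insert.
            // (Two threads may race on the same miss; both would insert
            // an equivalent entry, so it's harmless here.)
            entry = new CacheEntry();
            entry.Data = ExecuteQuery(sql);   // placeholder for the real data call
            entry.LastRun = DateTime.Now;
            entry.Lifetime = TimeSpan.FromMinutes(10);
            lock (cache.SyncRoot)
            {
                cache[sql] = entry;
            }
        }
        return entry.Data;
    }

    private void Refresh(object state)
    {
        // Snapshot the keys under the lock so we don't enumerate the
        // Hashtable while other threads are inserting into it.
        ArrayList keys;
        lock (cache.SyncRoot)
        {
            keys = new ArrayList(cache.Keys);
        }
        foreach (string sql in keys)
        {
            CacheEntry entry = (CacheEntry)cache[sql];
            if (entry != null && DateTime.Now - entry.LastRun > entry.Lifetime)
            {
                object fresh = ExecuteQuery(sql);  // re-execute in the background
                lock (cache.SyncRoot)
                {
                    entry.Data = fresh;
                    entry.LastRun = DateTime.Now;
                }
            }
        }
    }

    private object ExecuteQuery(string sql)
    {
        // Stand-in for the real database call through the connection pool.
        return sql;
    }
}
```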
My primary concerns are the memory usage of in-memory tables (DataTable
objects), the impact of doing a DataTable.Clone() (possible locking and
synchronization issues), and running multiple timer callbacks in
ASP.NET.
The first question/concern:
We already run one timer in another singleton (the connection pooling
object), so I don't know what the impact of a second timer would be.
The connection pooling timer is set to 300 seconds, and I was thinking
about using the same interval for the caching object.
The second question/concern:
Keeping these small DataTables around in memory shouldn't require much
in the way of resources, but we will be doing a lot of
DataTable.Clone() calls. I don't know whether that is thread safe or
what the memory impact will be.
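One detail worth pinning down for the memory estimate: DataTable.Clone() copies only the table schema, while DataTable.Copy() copies the schema and the rows. A tiny illustration:

```csharp
using System;
using System.Data;

class CloneVsCopy
{
    static void Main()
    {
        DataTable source = new DataTable("Results");
        source.Columns.Add("Id", typeof(int));
        source.Rows.Add(1);

        DataTable schemaOnly = source.Clone();  // schema only, no rows
        DataTable full = source.Copy();         // schema plus row data

        Console.WriteLine(schemaOnly.Rows.Count);  // 0
        Console.WriteLine(full.Rows.Count);        // 1
    }
}
```

So if we are handing row data back to callers, it would be Copy() (or some per-row copy) doing the work, and that is the call whose memory and thread-safety cost matters.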
Anyone have any experience with this?