Guest
I'm trying to nail down some issues with the cache in my application.
Currently, I have an object that stands between my business logic and
database logic called CacheLogic (cute, no?). Global.asax.cs creates it in
Application_Start, initializes it and places it in the cache. During
initialization, CacheLogic retrieves data from the DB logic layer and caches
it with removal callbacks set. Whenever an object in the business logic
layer needs data, it grabs the CacheLogic object from the cache (that's a
violation of the layer right there, I know), and requests the needed data
from it. CacheLogic retrieves the data from the cache and returns it. If the
data is not in the cache, it retrieves the data from the database logic
layer, caches it (with removal callbacks), and then returns the data.
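To make the setup concrete, here's roughly the shape of that read-through path (a sketch only; DataLayer.Load, GetData, and OnItemRemoved are placeholder names, not my real code):

using System;
using System.Web;
using System.Web.Caching;

public static class DataLayer
{
    // Stub standing in for the database logic layer.
    public static object Load(string key) { return new object(); }
}

public class CacheLogic
{
    public object GetData(string key)
    {
        Cache cache = HttpRuntime.Cache;
        object value = cache[key];
        if (value == null)
        {
            // Cache miss: fall through to the database logic layer.
            value = DataLayer.Load(key);        // placeholder for the DB-layer call
            cache.Insert(key, value, null,
                         DateTime.UtcNow.AddMinutes(10),
                         Cache.NoSlidingExpiration,
                         CacheItemPriority.Normal,
                         OnItemRemoved);        // removal callback bound to this instance
        }
        return value;
    }

    private void OnItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // React to eviction/expiration here (log, re-prime, etc.).
    }
}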
I'm concerned about what happens when two clients request the same data at
the same time and that data is not yet in the cache. As I understand it,
CacheLogic will hit the database and cache the data twice. I'm not sure what
side effects that might have, so I'm trying to figure out how to avoid the
situation entirely without killing performance (e.g., by wrapping every cache
access in lock(HttpContext.Current.Cache)).
To make CacheLogic more thread-safe, I can lock on a sync object in the
class when I write to the cache, or use a Mutex. But what good is that unless
I also lock when I read from the cache? And isn't that just as bad as locking
the cache object itself?
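The pattern I keep seeing suggested is a double-checked miss path: read without
locking, and only take the lock (and re-check) when the item is missing. Since
individual reads and writes on the Cache are already thread-safe, the lock only
serializes the load-and-insert, not every read. A minimal sketch, reusing the
placeholder names from the earlier snippet:

// Inside the CacheLogic sketch above (same usings and OnItemRemoved).
private readonly object _syncRoot = new object();

public object GetData(string key)
{
    Cache cache = HttpRuntime.Cache;
    object value = cache[key];              // lock-free read for the common case
    if (value == null)
    {
        lock (_syncRoot)
        {
            value = cache[key];             // re-check: another thread may have loaded it
            if (value == null)
            {
                value = DataLayer.Load(key);
                cache.Insert(key, value, null,
                             DateTime.UtcNow.AddMinutes(10),
                             Cache.NoSlidingExpiration,
                             CacheItemPriority.Normal,
                             OnItemRemoved);
            }
        }
    }
    return value;
}

That still funnels every miss through one shared lock; a dictionary of per-key
sync objects would narrow it further, but the single sync object is usually
enough unless misses are frequent.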
The other issue I have is that CacheLogic stores objects in the cache with
removal callbacks pointing to methods in the CacheLogic object. If I create
CacheLogic objects willy-nilly, I could end up with multiple objects in
memory, referenced only by the callback in some cache object. Sounds too
close to a memory leak for me. Currently, by storing a single CacheLogic
object in the cache itself, I have one single object that handles caching and
callbacks for all client connections. I'm not happy that in order to get the
CacheLogic object, other objects have to violate the layers and access the
cache directly.
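One thing that would at least contain the leak risk: if the removal callback is
a static method, the delegate the cache holds has no target instance, so a
stray CacheLogic object can't be kept alive just because one of its items is
still cached. Rough sketch (again with placeholder names):

using System;
using System.Web;
using System.Web.Caching;

public class CacheLogic
{
    public void Put(string key, object value)
    {
        HttpRuntime.Cache.Insert(key, value, null,
                                 DateTime.UtcNow.AddMinutes(10),
                                 Cache.NoSlidingExpiration,
                                 CacheItemPriority.Normal,
                                 OnItemRemoved);    // static method: delegate captures no "this"
    }

    // Static callback: nothing in the cache references any particular CacheLogic object.
    private static void OnItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        // log / re-prime as needed
    }
}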
Do I even need to worry about keeping a single CacheLogic object? Is
converting CacheLogic into a singleton (is there an ISingleton interface?) a
good solution to this problem? And, if I do, once I use the CacheLogic
singleton in Application_Start (assuming I code it properly), will it be
truly available to all clients who call CacheLogic.GetInstance()?
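For what it's worth, there is no ISingleton interface; it's just a pattern, and
the plain static-field version looks like it would answer most of this. A
sketch of what I have in mind:

public sealed class CacheLogic
{
    // The CLR runs the static initializer exactly once per AppDomain,
    // so this is thread-safe without any explicit locking.
    private static readonly CacheLogic _instance = new CacheLogic();

    private CacheLogic() { }   // private ctor: no one can create extra instances

    public static CacheLogic GetInstance()
    {
        return _instance;
    }
}

Application_Start could then just call CacheLogic.GetInstance() to warm it, and
the business-logic layer calls the same method later. Since statics live per
AppDomain, every request in the application sees the same instance, and nothing
would have to be parked in the Cache at all.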