I have a Hashtable in memory and want to sync updates to the Hashtable
to disk. There may be frequent updates to the Hashtable, and I want to
avoid a disk write for every small update. Has anyone got any idea how to
do this?
Loads.
How do you want to store the hashtable?
Let's assume serialisation. Not tested, and obviously not ready for real
use:
import java.io.File;
import java.io.Serializable;
import java.util.Collections;
import java.util.Map;

public class MapDumper {
    public static <K, V> Map<K, V> makeDumpingMap(Map<K, V> m, File file, long interval) {
        Serializable s = (Serializable) m; // Hashtable and HashMap are both Serializable
        // all access goes through the synchronised wrapper; the dumper locks on the
        // same wrapper, so each dump sees a consistent snapshot of the map
        Map<K, V> sm = Collections.synchronizedMap(m);
        new PeriodicDumper(s, sm, file, interval).start();
        return sm;
    }
}
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.OutputStream;
import java.io.Serializable;

public class PeriodicDumper implements Runnable {
    private final Serializable obj;
    private final Object lock;
    private final File file;
    private final long interval;
    private volatile Thread t;

    public PeriodicDumper(Serializable obj, Object lock, File file, long interval) {
        this.obj = obj;
        this.lock = lock;
        this.file = file;
        this.interval = interval;
    }

    public void run() {
        while (t != null) {
            try {
                Thread.sleep(interval);
            } catch (InterruptedException e) {
                // just treat an interrupt as an early exit from the sleep
            }
            try {
                dump();
            } catch (IOException e) {
                // placeholder: log or otherwise report the failed dump
            }
        }
    }

    public void dump() throws IOException {
        // serialise into a buffer so no file IO happens while holding the lock
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        ObjectOutputStream oout = new ObjectOutputStream(buf);
        synchronized (lock) {
            oout.writeObject(obj);
            oout.close();
        }
        OutputStream fout = new FileOutputStream(file);
        try {
            buf.writeTo(fout);
        } finally {
            fout.close();
        }
    }

    public void start() {
        synchronized (this) {
            // only start once; a second call would otherwise throw
            // IllegalThreadStateException from Thread.start()
            if (t == null) {
                t = new Thread(this);
                t.setDaemon(true);
                t.start();
            }
        }
    }

    public void stop() {
        synchronized (this) {
            if (t != null) {
                Thread dumper = this.t;
                this.t = null;      // tells run() to finish
                dumper.interrupt(); // wake it up if it is asleep
            }
        }
    }
}
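A minimal usage sketch, assuming the two classes above are on the classpath
and using a Hashtable as the backing map as in the question (the file name
and the five-second interval are just examples):

import java.io.File;
import java.util.Hashtable;
import java.util.Map;

public class Example {
    public static void main(String[] args) {
        Map<String, String> backing = new Hashtable<String, String>();
        // dump the map to cache.ser roughly every five seconds
        Map<String, String> m = MapDumper.makeDumpingMap(backing, new File("cache.ser"), 5000);
        m.put("hello", "world"); // use the returned map, not the original, from here on
    }
}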
Also, if you could get access to the magic cookie inside the map that's used
to detect concurrent modifications (the modCount field in the standard
implementations), you could easily skip dumps when no change has occurred.
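Since that field isn't exposed, a rough alternative is to track writes
yourself. A sketch, using a hypothetical DirtyTrackingMap wrapper (not part
of the code above) whose clearDirty() the dumper would check before each
dump; changes made through the entry-set iterator are not tracked here:

import java.io.Serializable;
import java.util.AbstractMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.atomic.AtomicBoolean;

public class DirtyTrackingMap<K, V> extends AbstractMap<K, V> implements Serializable {
    private final Map<K, V> delegate; // should itself be serialisable, e.g. a Hashtable
    private final AtomicBoolean dirty = new AtomicBoolean(false);

    public DirtyTrackingMap(Map<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override public V put(K key, V value) { dirty.set(true); return delegate.put(key, value); }
    @Override public V remove(Object key)  { dirty.set(true); return delegate.remove(key); }
    @Override public void clear()          { dirty.set(true); delegate.clear(); }
    @Override public Set<Map.Entry<K, V>> entrySet() { return delegate.entrySet(); }

    // the dumper calls this before serialising and skips the dump if it returns false
    public boolean clearDirty() {
        return dirty.getAndSet(false);
    }
}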
You should do the dump a bit more cleverly than this, too, so you're never
in a state where the data on disk is incomplete. Dump to a second file,
then atomically rename over the first.
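For instance, a dump along these lines; a sketch assuming java.nio.file
(Java 7+), with AtomicDump and writeAtomically being made-up names, not part
of the code above:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicDump {
    // write the serialised buffer to a temporary file in the same directory,
    // then atomically move it over the target, so readers never see a
    // half-written file
    public static void writeAtomically(ByteArrayOutputStream buf, Path target) throws IOException {
        // the temporary file must be on the same file system for the move to be atomic
        Path tmp = Files.createTempFile(target.getParent(), "dump", ".tmp");
        try (OutputStream out = Files.newOutputStream(tmp)) {
            buf.writeTo(out);
        }
        // on POSIX file systems this replaces any existing target in a single rename
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}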
tom